
02) Has your organisation conducted a risk assessment of each internal use of AI models?

September 11, 2024
Artificial Intelligence
Internal AI application

Answer yes if your organisation has conducted and documented a regulatory compliance and security risk assessment for each AI-supported service in use, including both internally developed and supplier-provided services. Examples of risk assessment considerations include: how a Large Language Model (LLM) service operates and is secured, measured against the requirements of the EU AI Act or the OWASP Top 10 for LLM; an evaluation of output accuracy or bias countermeasures; abuse prevention measures; and the risk of Intellectual Property or Copyright infringement claims resulting from public use of AI-generated output.

What is the control?

Information Security is in large part about risk management. Quite simply, we improve security by removing risks as best we can within a given scope and level of resource.

The first step to being able to do so is assessing what risk your IT estate faces. Your organisation should therefore have a policy stating that risk assessments are to be performed as part of your risk management strategy. This policy should also define the scopes of these assessments.

Why should I have it?

Any significant change to your environment, including external factors such as changes to legislation, best practice guidance, emerging threats, and changes of processing scope, should be subject to an assessment to determine its impact. Few changes are potentially as significant as the adoption of services that change how information is processed, such as the use of AI models and services.

Where your organisation has adopted use of AI-supported services, it is important to evaluate the risks of disclosing your organisation’s data and information to each of those external services.

Use of Generative AI and Machine Learning is subject to regulation in some jurisdictions. In addition, the services and models have inherent vulnerabilities and potential for exploitation, undermining their integrity and the results of processing. Consideration should be given to the use of those processing results, including accuracy and risk of inherent bias that may influence decision-making.

How to implement the control

Adapt your risk management policy and process to include AI models in scope. The policy should dictate the circumstances and scope of risk assessment activities, and prescribe that your risk management processes identify risks (including what systems or data may be at risk, from which threats, and through which particular vulnerabilities), then evaluate those risks and lead to decisions on what controls or corrective measures should be put in place.

Record and track all of the above for accountability and auditability reasons, and review as necessary as part of ongoing risk management and continuous improvement efforts.
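The identify-evaluate-record cycle above can be sketched as a simple risk register entry. This is an illustrative example only: the field names, the 1–5 scoring scale, and the likelihood-times-impact rating are assumptions made for this sketch, not terminology prescribed by the EU AI Act, ISO/IEC 23894, or any other standard.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative risk-register entry for an AI-supported service.
# Field names and the 1-5 scoring scale are assumptions for this
# sketch, not requirements from any standard or regulation.
@dataclass
class AIRiskEntry:
    service: str        # the AI-supported service being assessed
    asset: str          # system or data at risk
    threat: str         # e.g. prompt injection, data leakage to supplier
    vulnerability: str  # weakness the threat could exploit
    likelihood: int     # 1 (rare) to 5 (almost certain)
    impact: int         # 1 (negligible) to 5 (severe)
    controls: list[str] = field(default_factory=list)
    assessed_on: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        """Simple likelihood x impact rating used to prioritise treatment."""
        return self.likelihood * self.impact

entry = AIRiskEntry(
    service="LLM-based support assistant",
    asset="Customer support tickets (personal data)",
    threat="Disclosure of customer data to the model supplier",
    vulnerability="No data-retention or training opt-out agreed with supplier",
    likelihood=3,
    impact=4,
    controls=[
        "Supplier agreement with no-training clause",
        "PII redaction before submission",
    ],
)
print(entry.service, entry.score)  # score = 3 * 4 = 12
```

Keeping entries in a structured form like this, whether in a spreadsheet, a GRC tool, or code, is what makes the subsequent review and audit steps tractable.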

There are numerous consultancies or individual consultants that will be able to assist in crafting a policy and process that meets your business and technical requirements.

The following references may be helpful:

OWASP Top 10 for LLM: https://genai.owasp.org/

Regulation: EU AI Act https://eur-lex.europa.eu/eli/reg/2024/1689/oj

Standard: ISO/IEC 42001:2023 Information Technology — Artificial Intelligence — Management System

Codes of practice:

  • ISO/IEC 23894:2023 Information Technology — Artificial Intelligence — Guidance on risk management
  • ISO/IEC 23053:2022 Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)

If you would like to contribute to this article or provide feedback, please email knowledge@riskledger.com. Contributors will be recognised on our contributors page.
