Answer yes if your organisation has a formal change management process that includes a step to assess any security or legal compliance risks the change may introduce, requires a rollback plan, and includes processes for notifying relevant clients of the change and any consequential differences in processing. Change management applies both when the AI model itself is updated and when the data applied to the model changes (e.g. the model is applied to support new services processing different client data).
What is the control?
Information Security is in large part about risk management. We improve security by removing risks as best we can within a certain scope and level of resource.
The first step to being able to do so is assessing what risks your service may pose to your organisation and to the operations of your clients' organisations. You should therefore have a policy stating that risk assessments are to be performed as part of your risk management strategy. This policy should also define the scope of these assessments to include AI services provided to clients.
Why should I have it?
Any significant change to your service delivery environment — including external factors such as changes to legislation, best practice guidance, emerging threats, and changes of processing scope — should be subject to an assessment to determine its impact. Few changes are potentially as significant as those to services that alter how information is processed, such as the provision of AI models and services.
Where your clients have adopted AI-supported services, it is important to evaluate the risks of disclosing their organisation's data and information to your service and to any related sub-processors (e.g. where your service delivery integrates processing with an externally-provided AI model).
Use of Generative AI and Machine Learning is subject to regulation in some jurisdictions. In addition, the services and models have inherent vulnerabilities and potential for exploitation, undermining their integrity and the results of processing. Consideration should be given to how those processing results are used, including their accuracy and the risk of inherent bias that may influence decision-making.
Adapt your risk management policy and process to include AI service delivery in scope. The policy should dictate the circumstances (change management triggers) and scope of risk assessment activities. It should also prescribe that your risk management processes identify risks (what systems or client data may be at risk, from what threats, and through what vulnerabilities), then evaluate those risks and lead to decisions on what controls or corrective measures should be put in place.
Record and track all of the above for accountability and auditability reasons, and review as necessary as part of ongoing risk management and continuous improvement efforts.
The change process should establish clear criteria for notifying your clients of proposed material changes to AI services.
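To make the record-keeping and notification steps above concrete, the sketch below models a change record that captures the trigger, identified risks, and a rollback plan, and applies a simple notification rule. This is an illustration only: the class names, the severity scale, and the "material change" criteria (any risk at or above an assumed threshold, or a change of data processing scope) are assumptions, not a prescribed implementation — your own policy should define the real triggers and thresholds.

```python
from dataclasses import dataclass, field
from datetime import date

# Assumed three-level severity scale; substitute your policy's own scale.
SEVERITY = {"low": 1, "medium": 2, "high": 3}

@dataclass
class Risk:
    description: str    # what could go wrong
    assets: list        # systems or client data at risk
    threat: str         # the threat source
    vulnerability: str  # the weakness being exploited
    severity: str       # "low" | "medium" | "high"
    control: str = ""   # chosen control or corrective measure

@dataclass
class ChangeRecord:
    change_id: str
    summary: str
    trigger: str        # e.g. "model update", "new client data scope"
    risks: list = field(default_factory=list)
    rollback_plan: str = ""
    assessed_on: date = field(default_factory=date.today)

    def requires_client_notification(self, threshold: str = "medium") -> bool:
        """Hypothetical materiality rule: notify clients when processing
        scope changed, or when any identified risk meets the threshold."""
        scope_changed = "data scope" in self.trigger
        return scope_changed or any(
            SEVERITY[r.severity] >= SEVERITY[threshold] for r in self.risks
        )
```

A record like this gives each change an auditable trail (who assessed what, when, and against which risks) and makes the client-notification decision reproducible rather than ad hoc.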
There are numerous consultancies or individual consultants that will be able to assist in crafting a policy and process that meets your business and technical requirements.
The following references may be helpful:
OWASP Top 10 for LLM: https://genai.owasp.org/
Regulation: EU AI Act: https://eur-lex.europa.eu/eli/reg/2024/1689/oj
Standard: ISO/IEC 42001:2023 Information Technology — Artificial Intelligence — Management System
Codes of practice:
If you would like to contribute to this article or provide feedback, please email knowledge@riskledger.com. Contributors will be recognised on our contributors page.