Answer yes if Machine Learning or Generative AI models are used anywhere within your organisation. This includes the use of AI features or capabilities embedded in your suppliers' services or in any SaaS tools your people use (e.g. Google Gemini or Microsoft Copilot). Unless it has been specifically prohibited and the restriction is technically enforced, it is likely that AI is being used somewhere within your organisation, and you should answer yes to this question.
What is the control?
Information security is in large part about risk management. Quite simply, we improve security by removing risks as best we can within a given scope and level of resource.
The first step to being able to do so is assessing what risks your IT estate faces. Your organisation should therefore have a policy stating that risk assessments are to be performed as part of your risk management strategy. This policy should also define the scope of these assessments.
Why should I have it?
Any significant change to your environment, including external factors such as changes to legislation, best practice guidance, emerging threats, and changes in processing scope, should be subject to an assessment to determine its impact. Few changes are potentially as significant as those that alter how your information is processed, such as the adoption of AI models and services.
Where your organisation has adopted the use of AI-supported services, it is important to evaluate the risks of disclosing your organisation's data and information to each of those external services.
Use of Generative AI and Machine Learning is subject to regulation in some jurisdictions. In addition, the services and models have inherent vulnerabilities and potential for exploitation, which can undermine their integrity and the results of their processing. Consideration should be given to how those processing results are used, including their accuracy and the risk of inherent bias that may influence decision-making.
Adapt your risk management policy and process to bring AI models into scope. The policy should dictate the circumstances and scope of risk assessment activities, and should require that your risk management process identifies risks (what systems or data may be at risk, from what threats, and because of what vulnerabilities), evaluates those risks, and leads to decisions on what controls or corrective measures should be put in place.
Record and track all of the above for accountability and auditability reasons, and review as necessary as part of ongoing risk management and continuous improvement efforts.
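As an illustration only, the sketch below shows one way a single risk register entry for an AI-supported service might be recorded and tracked. The structure, field names, and example values are assumptions chosen for illustration, not a prescribed or standard format.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch only: the fields and values below are assumptions,
# not a prescribed risk register format.
@dataclass
class AIRiskEntry:
    asset: str          # system or data at risk
    threat: str         # what could go wrong
    vulnerability: str  # why it could go wrong
    likelihood: str     # e.g. low / medium / high
    impact: str         # e.g. low / medium / high
    treatment: str      # chosen control or corrective measure
    owner: str          # accountable person or team
    review_date: date   # when the entry is next reviewed

# Hypothetical entry for an AI copilot feature embedded in a SaaS tool.
example = AIRiskEntry(
    asset="Customer support tickets (personal data)",
    threat="Disclosure of customer data to an external AI service",
    vulnerability="Prompts may include ticket contents; provider retention terms unclear",
    likelihood="medium",
    impact="high",
    treatment="Restrict the feature to anonymised data; review the supplier contract",
    owner="Information Security team",
    review_date=date(2025, 6, 1),
)

print(example)
```

However your organisation chooses to record this information, the important point is that each identified risk has an owner, a treatment decision, and a review date so it can be revisited as part of continuous improvement.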
There are numerous consultancies and individual consultants that can assist in crafting a policy and process that meets your business and technical requirements.
The following references may be helpful:
Standard: ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system
Codes of practice:
If you would like to contribute to this article or provide feedback, please email knowledge@riskledger.com. Contributors will be recognised on our contributors page.