Answer yes if you have ensured, as far as you are able, that your people (employees, managed contract resources, or anyone else acting on behalf of your organisation) review and evaluate AI model output before use. These processes help mitigate the risks arising from inaccuracies or ‘hallucinations’ (plausible-sounding but fabricated statements) in AI outputs, which, if applied without human review, can compromise data integrity and mislead decision-making. Describe how you ensure this human review takes place in all circumstances, or upload supporting documentation.
Regularly reviewing how AI models and services are used within your organisation to deliver services to clients (which workflows they support, what client data they process, and how the results of AI processing are used) can help inform your security and risk management controls.
Typically, an organisation’s Information Security function can assist; alternatively, numerous consultancies and individual consultants can help craft a policy that meets your business and technical requirements.
If you would like to contribute to this article or provide feedback, please email knowledge@riskledger.com. Contributors will be recognised on our contributors page.