Answer yes if you have ensured, as far as you are able, that users of your service review and evaluate AI model output before use. The measures you put in place should help mitigate the risks arising from inaccuracies or 'hallucinations' (plausible-sounding but fabricated statements) in AI outputs which, if acted on without human review, can compromise integrity and mislead decision-making. Depending on the service, measures could include tagging output as 'AI generated' or providing workflows that enable review before use. Upload a supporting document outlining how you have enabled this, or describe the measures you have put in place in the notes section.
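As an illustration only, a minimal sketch of such a review workflow might tag every model response as AI-generated and block downstream use until a human reviewer approves it. All names here (`AIOutput`, `use_output`, the reviewer address) are hypothetical, not a prescribed implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIOutput:
    """An AI model response, tagged and held for human review before use."""
    text: str
    ai_generated: bool = True          # surfaced to users as an 'AI generated' tag
    reviewed: bool = False
    reviewer: Optional[str] = None
    reviewed_at: Optional[datetime] = None

    def approve(self, reviewer: str) -> None:
        """Record that a human has reviewed and accepted this output."""
        self.reviewed = True
        self.reviewer = reviewer
        self.reviewed_at = datetime.now(timezone.utc)

def use_output(output: AIOutput) -> str:
    """Release the output for downstream use only after human review."""
    if not output.reviewed:
        raise PermissionError("AI output must be reviewed by a human before use")
    return output.text

# Example: a reviewer checks the model's answer before it informs a decision.
draft = AIOutput(text="Suggested remediation: rotate the exposed API key.")
draft.approve(reviewer="analyst@example.com")
print(use_output(draft))
```

A workflow like this produces an auditable record (reviewer and timestamp) that could support the evidence requested above.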