Answer yes if you have ensured, as far as you are able, that the users of your service review and evaluate AI model output before use. The measures you have put in place should help mitigate the risks arising from inaccuracies or ‘hallucinations’ (plausible but fabricated statements) within AI outputs which, if applied without human review, can compromise integrity and mislead decision-making. Depending on the service, these measures could include tagging output as 'AI generated' or providing workflows that enable review.
What is the control?
Where practicable, the output from AI processing should be reviewed by client service users prior to use in business operations to evaluate its integrity and usability for the intended purpose.
Why should I have it?
AI model responses may be the result of processing service users’ prompts, or of analysing the client organisation’s data (e.g. to generate metrics). These responses can include inaccuracies and ‘hallucinations’ (plausible but fabricated statements) as anomalous artefacts of that processing. If AI model responses are applied without human review, they can mislead your client’s decision-making or impact operational integrity.
Ensure service users are aware of the intended purpose of your service, any limitations of AI processing capability, and the consequential risks of using AI output without review or outside the service’s intended purpose.
Consider the operational workflows in which your client intends to apply AI processing, and build opportunities for review into those workflows where practicable, for example using prompts or decision gates.
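As an illustration of a decision gate, the following is a minimal sketch in Python of holding AI output until a human reviewer approves it. The queue, status values, and function names are all hypothetical; in practice the same effect is often achieved with approval steps in an existing ticketing or workflow tool rather than bespoke code.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical status values for a human-review decision gate.
PENDING, APPROVED, REJECTED = "pending", "approved", "rejected"

@dataclass
class AIOutput:
    content: str
    status: str = PENDING
    reviewed_by: str | None = None
    reviewed_at: datetime | None = None

def submit_for_review(output: AIOutput, queue: list[AIOutput]) -> None:
    """Hold AI output at the gate until a human reviewer decides."""
    queue.append(output)

def review(output: AIOutput, reviewer: str, approve: bool) -> None:
    """Record the reviewer's decision; only approved output may proceed."""
    output.status = APPROVED if approve else REJECTED
    output.reviewed_by = reviewer
    output.reviewed_at = datetime.now(timezone.utc)

def release_to_operations(output: AIOutput) -> str:
    """Block downstream use unless the output passed human review."""
    if output.status != APPROVED:
        raise PermissionError("AI output has not passed human review")
    return output.content
```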
Depending on the service, measures to support service user awareness could include tagging output as 'AI generated' (as some regulations require).
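As a sketch only, tagging could be as simple as wrapping each AI response in a structure that carries an explicit provenance flag. The field names below are illustrative and not drawn from any particular regulation or API; transparency rules such as those in the EU AI Act require disclosure of AI-generated content but do not mandate a schema.

```python
def tag_ai_output(content: str, model_name: str) -> dict:
    """Wrap AI output with an explicit 'AI generated' provenance label."""
    return {
        "content": content,
        "ai_generated": True,        # machine-readable flag for downstream systems
        "generated_by": model_name,  # which model produced the output
        "disclaimer": "AI generated - review before use",  # human-readable notice
    }
```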
When working with generated metrics, consider presenting the outputs graphically so that your service users can more easily identify anomalous results and outliers against expected trends. Where processing is used to identify outliers that trigger remedial processes (e.g. in ICT process control automation or dynamic network/computing capacity control automation), consider whether additional data processing and controls can be applied to moderate or verify AI results before a system takes action.
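The second point can be illustrated with a simple statistical guard: before an automated system acts on an AI-derived value, check it against recent history and escalate anything implausible to a person. The z-score threshold, the scaling hook, and the escalation function below are hypothetical placeholders, not recommendations.

```python
import statistics

def is_plausible(value: float, history: list[float], max_z: float = 3.0) -> bool:
    """Flag AI-derived values that deviate sharply from recent history."""
    if len(history) < 2:
        return True  # too little data to judge; rely on other controls
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return value == mean
    return abs(value - mean) / stdev <= max_z

def scale_to(units: float) -> None:
    """Placeholder for the real automation hook (e.g. capacity scaling)."""
    print(f"Scaling capacity to {units} units")

def escalate_for_human_review(units: float) -> None:
    """Placeholder for routing an outlier to a person instead of acting."""
    print(f"Outlier detected ({units} units); holding for human review")

def apply_capacity_change(ai_recommended_units: float, history: list[float]) -> None:
    """Act automatically only when the AI result passes the guard."""
    if is_plausible(ai_recommended_units, history):
        scale_to(ai_recommended_units)
    else:
        escalate_for_human_review(ai_recommended_units)
```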
If you would like to contribute to this article or provide feedback, please email knowledge@riskledger.com. Contributors will be recognised on our contributors page.