Answer yes if you have ensured, as far as you are able, that your people (employees, managed contract resources, or anyone else acting on behalf of your organisation) review and evaluate AI model outputs before use. These processes should help mitigate the risks arising from inaccuracies or ‘hallucinations’ (plausible but fabricated statements) in AI outputs which, if acted on without human review, can compromise integrity and mislead decision-making.
Consider the operational workflows in which AI processing is applied. Build opportunities for review into those workflows where practicable, and ensure service users are aware of the limitations of AI processing, examples of common processing flaws, and the consequent risks of using that output without review.
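As an illustration only, the sketch below shows one way a review step might be built into a workflow: AI-generated output is held in a pending state and only released once a named reviewer has approved it. The names used (`ReviewGate`, `Draft`, `publish`) are hypothetical and do not refer to any specific product or library.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class Draft:
    """An AI-generated output awaiting human review before use."""
    content: str
    source_model: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reviewed_by: Optional[str] = None
    approved: bool = False


class ReviewGate:
    """Holds AI outputs in a pending queue until a person approves them."""

    def __init__(self) -> None:
        self.pending: list[Draft] = []

    def submit(self, content: str, source_model: str) -> Draft:
        draft = Draft(content=content, source_model=source_model)
        self.pending.append(draft)
        return draft

    def approve(self, draft: Draft, reviewer: str) -> Draft:
        # Record who reviewed the output so the decision is auditable.
        draft.reviewed_by = reviewer
        draft.approved = True
        self.pending.remove(draft)
        return draft


def publish(draft: Draft) -> None:
    """Only approved, human-reviewed outputs are released for use."""
    if not draft.approved:
        raise PermissionError("AI output has not been reviewed; refusing to publish.")
    print(f"Publishing output reviewed by {draft.reviewed_by}: {draft.content}")


# Example: the AI output cannot be published until a reviewer signs it off.
gate = ReviewGate()
d = gate.submit("Summary of supplier incident report...", source_model="example-llm")
gate.approve(d, reviewer="j.smith")
publish(d)
```

The design point is simply that release is a separate, recorded step from generation, so unreviewed output cannot reach users by default.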
When working with generated metrics, consider presenting the outputs graphically so that anomalous results and outliers are easier to identify against expected trends. Where processing is used to identify outliers that trigger remedial processes (e.g. in ICT process control or dynamic network/computing capacity control), consider whether additional data processing and controls can be applied to moderate or verify AI results before a system takes action.
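By way of illustration, the sketch below applies one possible independent check before an AI-flagged outlier is allowed to trigger an automated remedial action: a simple z-score test against recent history, with escalation to a person when the check disagrees. The threshold, function names, and example data are assumptions for the sketch, not part of any specific system.

```python
import statistics


def verify_outlier(flagged_value: float, recent_values: list[float],
                   z_threshold: float = 3.0) -> bool:
    """Independently check an AI-flagged outlier against a simple statistical
    baseline before allowing an automated remedial action.

    Returns True only if the value also looks anomalous under a z-score test;
    the 3-standard-deviation threshold is an illustrative assumption.
    """
    if len(recent_values) < 10:
        return False  # not enough history to verify; fall back to human review
    mean = statistics.fmean(recent_values)
    stdev = statistics.stdev(recent_values)
    if stdev == 0:
        return flagged_value != mean
    z = abs(flagged_value - mean) / stdev
    return z >= z_threshold


def handle_ai_alert(flagged_value: float, recent_values: list[float]) -> str:
    """Moderate the AI result: act automatically only when the independent
    check agrees; otherwise escalate to a person."""
    if verify_outlier(flagged_value, recent_values):
        return "trigger-remedial-action"   # e.g. scale capacity, open incident
    return "escalate-for-human-review"


# Example: CPU utilisation samples, with two values the AI has flagged.
history = [41.0, 43.5, 40.2, 44.1, 42.8, 39.9, 43.0, 41.7, 42.2, 40.8]
print(handle_ai_alert(97.5, history))  # independent check agrees -> act
print(handle_ai_alert(46.0, history))  # check disagrees -> human review
```

The same verified history can also be charted alongside the flagged values, supporting the graphical review of anomalies described above.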