Answer yes if any of the AI-supported services your organisation uses train AI models on your organisational data and information. Where information is used for AI training, describe the controls you have in place to mitigate the risks of your confidential or sensitive data being stored and re-used.
Consider the operational workflows in which AI processing is applied. Build opportunities for review into those workflows where practicable, and ensure service users understand the limitations of AI processing, common examples of processing flaws, and the consequent risks of acting on AI-generated information without review.
When working with generated metrics, consider presenting the outputs graphically to make anomalous results and outliers easier to spot against expected trends. Where processing is used to identify outliers that trigger remedial processes (e.g. in ICT process control or dynamic network/computing capacity control), consider whether additional data processing and controls can be applied to moderate or verify AI results before a system takes action.
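As a minimal sketch of the moderation idea above, the fragment below flags statistical outliers in a metric series and then applies a simple confirmation-window check before any remedial action would fire, so a single anomalous AI result cannot trigger the system on its own. The function names, the z-score threshold, and the window size are illustrative assumptions, not a prescribed implementation.

```python
import statistics

def flag_outliers(values, threshold=3.0):
    """Return indices of values more than `threshold` standard
    deviations from the sample mean (a basic z-score check)."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

def action_verified(values, flagged, confirm_window=3):
    """Moderate the result: only permit remedial action if the last
    `confirm_window` samples were ALL flagged, so a one-off anomaly
    does not trigger the system by itself."""
    flagged_set = set(flagged)
    recent = range(len(values) - confirm_window, len(values))
    return all(i in flagged_set for i in recent)

# Example: a stable baseline followed by one anomalous reading.
series = [10.0] * 20 + [50.0]
flagged = flag_outliers(series)          # flags only the final sample
act = action_verified(series, flagged)   # False: one sample is not enough
```

The same pattern generalises to any secondary control: the confirmation step could instead be a rule-based bounds check, a second independent model, or a human review queue, as suits the workflow.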
If you would like to contribute to this article or provide feedback, please email knowledge@riskledger.com. Contributors will be recognised on our contributors page.