Answer yes if your organisation has conducted and documented a regulatory compliance and security risk assessment for each AI-supported service in use, including both internally developed and supplier-provided services. Examples of risk assessment considerations include: how the LLM service operates and is secured against the requirements of the EU AI Act or the OWASP Top 10 for LLM Applications; an evaluation of output accuracy and bias countermeasures; abuse prevention measures; and the risk of intellectual property or copyright infringement claims resulting from public use of AI-generated output. Upload supporting document(s) evidencing the assessment(s), or describe the assessment(s) in the notes section.
What is the control?
It’s important to clearly understand how data provided to AI models and services is used, particularly where suppliers embed AI processing into their services and the full scope of processing may be unclear.
Why should I have it?
AI providers continuously strive to improve their services. One aspect of that improvement is moderating and assessing outputs for usability and accuracy against the prompt that was provided, in order to tune their models. One strategy to achieve this is to incrementally or continuously re-train models with new information provided by service users.
It’s important to consider the nature of the data and information disclosed to an external AI-supported service, and any controls you need to apply to mitigate the risk of confidential or sensitive data being stored and re-used by that supplier.
Your change process should require approval for any new AI model or service, and for any change to an existing AI model or service.
This approval process should include a security assessment covering the full scope of the proposed AI processing in the context of its use in operational workflows. You should also assure that data provided to models is restricted, as far as practicable, to limit what is exposed to the AI model. AI service contracts should be reviewed to determine whether user data is repurposed to improve the service provider’s models; the nature of that data processing may be explicitly defined in the contract, or implied by use of the service.
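The data-minimisation point above can be illustrated with a simple redaction filter applied to prompts before they leave your environment. This is a minimal sketch under stated assumptions: the `SENSITIVE_PATTERNS` table and the `redact_prompt` helper are hypothetical names for illustration, not part of any specific DLP product or AI provider API, and real deployments would use a dedicated DLP or PII-detection service rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only; a production system would rely on a
# proper DLP / PII-detection tool with far broader coverage.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(text: str) -> str:
    """Replace matches of each sensitive pattern with a labelled placeholder
    so the raw values are never sent to the external AI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

if __name__ == "__main__":
    prompt = ("Summarise the complaint from jane.doe@example.com "
              "about card 4111 1111 1111 1111.")
    print(redact_prompt(prompt))
```

Running a filter like this at the boundary where prompts are assembled keeps the restriction enforceable in one place, rather than relying on every workflow to minimise data individually.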
The change process should also inform other processes and actions, such as reconfiguring data loss prevention policies and technical controls where risks exceed your tolerance.
Numerous consultancies and individual consultants can assist in designing a data processing workflow that meets your business and technical requirements.
If you would like to contribute to this article or provide feedback, please email knowledge@riskledger.com. Contributors will be recognised on our contributors page.