Answer yes if you have controls in place to ensure that sensitive data is not processed by AI. This could include any data that your customers have defined as confidential, for example intellectual property or commercially sensitive information, in addition to personal information. These controls may include identifying and pre-treating data before AI processing, or other measures that prevent certain information from being entered. Please describe how this is achieved in the notes section.
Why should I have it?
Some regulated and public sector clients have specific requirements to control the use of sensitive data. More broadly, the perception of novel data processing technologies like AI can trigger a risk-averse reaction and concerns about uncontrolled disclosure of information in many sectors.
The ability to pre-process and treat sensitive data can assure clients of the integrity of your AI-enabled service, complementing their data loss prevention needs.
Naturally, there are practical limits on the ability to parse and redact or mask sensitive data in LLM prompts or ingested database fields. Clients may also expect sensitive data fields or phrases to be replaced with reference strings prior to processing, with the original strings restored in the post-processing response.
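As an illustration only, the reference-string approach described above can be sketched as a pre-processing step that swaps sensitive matches for opaque tokens before AI processing, and a post-processing step that restores them in the response. The regex patterns and token format below are illustrative assumptions, not a complete data classification scheme, and a real service would need a far more robust detection layer.

```python
import re
import uuid

# Hypothetical detection patterns - a real deployment would use a proper
# data classification / PII detection capability, not two regexes.
SENSITIVE_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email-like strings
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),       # card-like number runs
]

def pre_process(text):
    """Replace sensitive matches with reference tokens.

    Returns the masked text plus a token-to-original mapping that must be
    held securely on the service side for post-processing.
    """
    mapping = {}

    def _swap(match):
        token = f"<<REF-{uuid.uuid4().hex[:8]}>>"  # opaque reference string
        mapping[token] = match.group(0)
        return token

    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub(_swap, text)
    return text, mapping

def post_process(text, mapping):
    """Restore the original strings in the AI response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

masked, refs = pre_process("Contact alice@example.com about the invoice.")
restored = post_process(masked, refs)
```

Note that the mapping itself now holds the sensitive values, which is why assurance over the service-side pre- and post-processing systems matters as much as the substitution itself.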
In all cases, assurance of safe handling of data in the service-side pre- and post-processing systems will likely be necessary.
Requirements for treatment will be broadly indicated by regulatory mandates, or will emerge from discussion of client needs during AI service development.
If you would like to contribute to this article or provide feedback, please email knowledge@riskledger.com. Contributors will be recognised on our contributors page.