Network Trace

How Will Generative AI Impact Supply Chain Cyber Security?

Generative AI is a double-edged sword. On the one hand, it will help security teams by adding its advanced data analysis power to enhance threat intelligence and analysis.On the other hand, the majority of security professionals attribute the significant overall increase in cyber attacks since 2023. However, despite over half (53%) of organisations acknowledging generative AI as a risk, just over a third (38%) have so far taken steps to mitigate against it.

Our 5 part series provides you with a comprehensive understanding of generative AI, how it will impact your supply chain security and some practical recommendations you can follow to harness its impact while avoiding its pitfalls.

Canvas

Part 1: The (In)Security Guide to Generative AI and Large Language Models

Key takeaways:

  • Large Language Models (LLMs) such as ChatGPT and generative AI more generally are here to stay and are already impacting most organisations, as LLMs are being rapidly integrated in many business critical software and platforms worldwide.
  • The first data breach of an LLM, of ChatGPT in March 2023, demonstrated that LLMs and their developers are equally as liable to be breached via their supply chains as any other organisation or software provider.
  • As LLMs advance in maturity and adoption becomes even more widespread and complete, organisations face growing risks to the confidentiality, integrity and availability of their data and systems.
  • Although several countries are currently working on new regulatory regimes for generative AI, it is already falling to companies and their security teams to put guardrails and standards in place for how to integrate LLMs into their processes and workflows securely, and to ensure that any of their business critical third party suppliers have also done so.

Part 2:The Brave New World of Generative AI and
Large Language Models (LLMs)

Key takeaways:

  • Large Language Models are one type of generative AI, but the meaning of the term remains ambiguous.
  • The evolving LLM ecosystem, while seemingly large and diverse, is dominated by a few main players and their foundation models, creating potential concentration risk in the event of future breaches.
  • The use cases and integrations of LLMs are proliferating fast, with many exciting additional use cases conceivable in the future, especially in areas such as research and development, software engineering, marketing and customer services. 
  • LLMs are best described as stochastic parrots, and are a long way off from evolving into any artificial general intelligence or developing any sentience and related perceived threats.
  • The current hype around the potential dangers of AI to humanity, often made by proponents of LLMs themselves, are largely overblown and distract from more important and immediate risk discussions to be had.

Part 3: Cyber Security Risks of
Generative AI and Large Language Models

Key takeaways:

  • An in-house LLM poses significant risks to the Confidentiality, Integrity and Availability (CIA) of organisations’ systems and data. ChatGPT can pose such a risk to in the case of an attack on a browser inside your organisation or the aforementioned API code you have written.
  • In-house LLMs are subject to potential data leakage and other privacy concerns that could impact the confidentiality of organisations’ data, including staff and proprietary business information.
  • Any LLM is subject to prompt injection attacks, which could be used by attackers to undermine the confidentiality and availability of the systems and data of organisations that have integrated LLMs into their workflows and systems, either directly or indirectly through third party applications that have done so.
  • Model bias and data poisoning can also affect the integrity of the data outputs of LLMs.
  • LLMs, through their software stacks and website interfaces, are also subject to denial of service attacks, which could affect the availability of services that organisations have integrated into business workflows.

Part 4: The Generative AI and LLM Revolution:
Opportunities for Cyber Security Professionals

Key takeaways:

  • While bearing many risks, generative AI and large language models also present new opportunities to help cyber security teams bolster their organisations’ cyber defences and achieve significant productivity and efficiency gains
  • With their ability to analyse massive amounts of data, pull out key data and generate meaningful responses and summaries, LLMs could support security teams in monitoring and interpreting otherwise overwhelming threat intelligence data and security alerts.
  • LLMs could be used to generate incident scenarios and playbooks for cyber security teams, to be used in red team and tabletop exercises.
  • Microsoft Security Copilot, which has already integrated GPT-4, specifically tailored to cyber security use cases, supporting the work of cyber security analysts during and after incidents. It allows users to take advantage of Copilot’s processing of collected daily by Microsoft's threat intelligence team as well as NIST and CISA and make them available as responses to simple LLM prompts.

Part 5: Using Generative AI and Large Language Models Securely

Key takeaways:

  • Best practices and recommendations for cyber security professionals: a tactical Guide for CISOs
  • What questions you should be asking to your suppliers
  • How to run an assessment to better understand how LLMs are being used in your organisation
  • 2 page handout: Risks and Challenges of Generative AI
Pattern Trapezoid Mesh
Data Insights Report

Download for free

By submitting this form, you agree to Risk Ledger’s Terms of Service, Privacy Policy, and Risk Ledger contacting you.

Thank you!
Download
Oops! Something went wrong while submitting the form.
Data Insights Report

Download for free

Download