Risk Management in the Era of Large Language Models and Generative AI

Large Language Models (LLMs) and Generative AI are cutting-edge artificial intelligence technologies that are rapidly evolving in the business landscape. LLMs are a subset of Generative AI (Gen-AI) focused specifically on language-related tasks: AI systems capable of understanding and generating human-like text based on large datasets. Generative AI, on the other hand, encompasses a broader category of AI systems capable of creating new content, including text, images, and audio, that mimics human creativity.

This blog post will break down the security implications of using LLMs and Gen-AI in today’s business landscape. We’ll cover risk management and best practices associated with these technologies. Organizations should be aware of, and leverage, guidance from OWASP, a nonprofit foundation that works to improve the security of software. OWASP is well known for the OWASP Top 10 and has more recently published the OWASP Top 10 for LLM Applications.

Large Language Models: Power & Problems

To make smart decisions about using Large Language Models (LLMs), it’s important to know the types of risk they bring and how they fit into the business strategy. This way, organizations can weigh the benefits of LLMs against the downsides and make sure these tools help the business succeed instead of holding it back.

LLMs and their applications expand an organization’s attack surface and give bad actors new avenues of attack. Some of the risks are new, but others are the same familiar challenges: knowing what software you’re running, keeping data safe, and controlling who can access it.

Generative AI: Creation & Complexity

Generative AI, like magic, can create things seemingly out of thin air, whether it’s writing stories, generating images, or even crafting realistic-sounding conversations. But just like magic, it has its dangers. Imagine if a threat actor used this magic to create fake news articles, misleading images, or even malicious software. That’s the kind of trouble generative AI can cause.

One big worry is that it’s hard to tell what’s real and what’s fake when generative AI is involved. This could lead to all sorts of problems, like powering phishing campaigns or other social engineering attacks. Another concern is privacy. Generative AI needs lots of data to work its magic, and that data could include personal information about users or organizations. If it falls into the wrong hands, it could be used for all sorts of mischief, from identity theft to blackmail. So while generative AI can do amazing things, organizations need to be careful and think about how to keep it from causing trouble and eroding the trust organizations place in their business partners.

Security Implications of LLMs & Gen-AI

First of all, there are significant data privacy concerns associated with the usage of LLMs and Gen-AI. Two notable privacy considerations are:

  1. Risks associated with the vast amount of data required for training.
  2. The potential for unintentional data leaks and breaches.

Both training data and the data fed into an LLM or Gen-AI tool can inadvertently contain sensitive information, so building guardrails through policy and technical controls is an important factor organizations must consider.
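
To make this concrete, here is a minimal sketch in Python of one such technical guardrail: redacting common sensitive patterns from a prompt before it leaves the organization’s boundary. The patterns and the redact_sensitive function are illustrative assumptions for this sketch, not a vetted implementation; a production deployment would rely on a mature data loss prevention or redaction service with far broader coverage.

    import re

    # Illustrative patterns only (an assumption for this sketch); a real
    # guardrail would use a vetted DLP/redaction library with broader coverage.
    SENSITIVE_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact_sensitive(prompt: str) -> str:
        """Replace likely-sensitive substrings with labeled placeholders
        before the prompt is sent to an external LLM API."""
        redacted = prompt
        for label, pattern in SENSITIVE_PATTERNS.items():
            redacted = pattern.sub(f"[REDACTED {label}]", redacted)
        return redacted

    if __name__ == "__main__":
        raw = "Email jane.doe@example.com about SSN 123-45-6789."
        print(redact_sensitive(raw))
        # Email [REDACTED EMAIL] about SSN [REDACTED SSN].

A control like this sits alongside, not in place of, the policy guardrails discussed above: the technical filter catches common patterns, while policy defines what should never be sent to these tools in the first place.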

Misinformation and fake content are also significant issues associated with LLMs and Gen-AI. If a tool is not appropriately trained (and let’s face it, most are not), it will confidently produce false information, which could lead organizations to act on fabricated output from these tools. Organizations must consider the challenge of distinguishing between genuine and generated content, as well as the potential impact on public discourse and trust should things go awry.

Manipulation and social engineering are another area of significant concern. The ability to craft persuasive, deceptive messages has led to an increase in both the quality and the velocity of phishing attacks and other social engineering activities. Threat actors are able to leverage these tools to bypass traditional protections, such as email security controls, for example.

Ethical Considerations & Guardrails

Ethical considerations in LLM and AI development by organizations involve thinking about what’s right or wrong when creating or leveraging these powerful technologies. For example, developers need to consider how their creations might affect privacy. They also need to think about biases – unfair preferences – that might show up in the AI’s decisions or language. Being ethical in AI development means being responsible and thinking about the consequences of what you create.

Further, implementing robust security measures means making sure that LLMs and AI systems are protected from malicious threats, like hackers stealing information or using AI to do harm. Another part of robust security is checking and double-checking to make sure that the AI is doing what it’s supposed to do and not being tricked into doing something harmful. By putting these measures in place, organizations can help ensure that LLMs and AI are used safely and responsibly.
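
As one illustration of that “checking and double-checking,” below is a minimal sketch in Python of validating a model’s output before the application acts on it: only responses that parse as an expected JSON structure are accepted, and everything else is rejected as untrusted. The EXPECTED_KEYS schema and the validate_llm_output function are hypothetical names for this sketch; real systems would combine schema validation with human review and ongoing monitoring.

    import json

    # Hypothetical schema for this sketch: the application expects the model
    # to return a JSON object with exactly these fields and values.
    EXPECTED_KEYS = {"summary", "risk_rating"}
    ALLOWED_RATINGS = {"low", "medium", "high"}

    def validate_llm_output(raw_response: str) -> dict:
        """Parse and sanity-check a model response before acting on it;
        reject malformed or unexpected output rather than trust it."""
        try:
            data = json.loads(raw_response)
        except json.JSONDecodeError as exc:
            raise ValueError("Response was not valid JSON") from exc
        if not isinstance(data, dict) or set(data) != EXPECTED_KEYS:
            raise ValueError("Response did not match the expected schema")
        if data["risk_rating"] not in ALLOWED_RATINGS:
            raise ValueError("Unexpected risk_rating value")
        return data

    if __name__ == "__main__":
        print(validate_llm_output(
            '{"summary": "No issues found.", "risk_rating": "low"}'
        ))

The design choice here is simple: treat model output like any other untrusted input, so a response that has been manipulated (for example, through prompt injection) fails validation instead of silently driving downstream actions.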

LLM & Gen-AI Adoption Strategies

OWASP has provided excellent guidance to the industry through the publication of its LLM and AI Security Governance Checklist, which defines a six-step process for creating a deployment strategy for LLMs and Gen-AI tools.

The adoption strategies defined by OWASP include critical steps such as threat modeling, business process reviews, collaboration with internal stakeholders, and third-party risk management, among other practices. For organizations just stepping into the world of LLMs and Gen-AI, this resource is a must-read and an excellent first step toward developing policies and procedures for LLM and Gen-AI usage.

Conclusion: What Your Auditor Wants to Know

There are a few key questions your auditor may ask about your usage or adoption of LLM and Gen-AI capabilities in your environment. We present the following to help you consider how effectively you have addressed them:

  1. For LLM and Gen-AI tools, how are you assessing and addressing data privacy considerations, both for the data used to train models and for the data fed into models?
  2. How has your organization developed effective policies and procedures associated with LLM and Gen-AI usage within the organization?
  3. What controls has your organization implemented to counter new and ever-evolving threats associated with the usage of LLMs and Gen-AI tools by threat actors?

If your organization is looking to demonstrate effective LLM and Gen-AI controls, elevate your information security posture with our expertise at Linford & Company, LLP. As experienced auditors and technologists, we understand the critical importance of implementing and assessing controls to safeguard your organization’s critical assets.

Ready to take the next step in your information security program and pursue a greater level of assurance for your clients and business partners? Reach out to us today, and let’s develop a roadmap for the assessment of your environment, including LLM and Gen-AI adoption and governance. Audits can seem complicated, but we simplify the process for our clients. Whether in support of SOC 2, HITRUST, FedRAMP, ISO, HIPAA, CMMC, or other requirements, your resilient and secure future begins with a simple conversation.