AI Technology Radar

OWASP Top 10 for LLM

security, evaluation
Adopt

The OWASP Top 10 for Large Language Model (LLM) Applications is a security framework highlighting the most critical risks when deploying LLMs in production. It provides guidance to AI engineers and developers for identifying and mitigating vulnerabilities in LLM-based systems.

You should regularly check the latest version on the official OWASP website (genai.owasp.org) and apply the recommended mitigation strategies.

Summary of the OWASP LLM Top 10 (2025):

  1. LLM01:2025 Prompt Injection A Prompt Injection Vulnerability occurs when user prompts alter the behavior of the LLM, resulting in data leakage, privilege escalation, or unwanted actions. See Prompt Injection Awareness and the input-screening sketch after this list.

  2. LLM02:2025 Sensitive Information Disclosure Sensitive information can affect both the LLM and its application context. This includes personally identifiable information (PII), financial details, health records, confidential business data, security credentials, and legal documents. A redaction sketch follows this list.

  3. LLM03:2025 Supply Chain LLM supply chains are susceptible to various vulnerabilities, which can affect the integrity of training data, models, and deployment platforms. An artifact-pinning sketch follows this list.

  4. LLM04:2025 Data and Model Poisoning Data poisoning occurs when pre-training, fine-tuning, or embedding data is manipulated to introduce vulnerabilities, backdoors, or biases.

  5. LLM05:2025 Improper Output Handling Improper Output Handling refers specifically to insufficient validation, sanitization, and handling of the outputs generated by large language models before they are passed downstream to other components and systems. Treat the model as any other user, adopting a zero-trust approach, and apply proper input validation on responses coming from the model to backend functions; an output-validation sketch follows this list.

  6. LLM06:2025 Excessive Agency An LLM-based system is often granted a degree of agency by its developer: the ability to call functions or interface with other systems via extensions (sometimes referred to as tools, skills, or plugins by different vendors) to undertake actions in response to a prompt. Excessive Agency is the vulnerability that enables damaging actions to be performed in response to unexpected, ambiguous, or manipulated outputs from an LLM. A least-privilege dispatch sketch follows this list.

  7. LLM07:2025 System Prompt Leakage The system prompt leakage vulnerability in LLMs refers to the risk that the system prompts or instructions used to steer the behavior of the model can also contain sensitive information that was not intended to be discovered. It's important to understand that the system prompt should not be considered a secret, nor should it be used as a security control.

  8. LLM08:2025 Vector and Embedding Weaknesses Vector and embedding vulnerabilities present significant security risks in systems using Retrieval Augmented Generation (RAG) with Large Language Models (LLMs). Weaknesses in how vectors and embeddings are generated, stored, or retrieved can be exploited, deliberately or unintentionally, to inject harmful content, manipulate model outputs, or access sensitive information. An ACL-filtered retrieval sketch follows this list.

  9. LLM09:2025 Misinformation Misinformation from LLMs poses a core vulnerability for applications relying on these models. Misinformation occurs when LLMs produce false or misleading information that appears credible. This vulnerability can lead to security breaches, reputational damage, and legal liability. One of the major causes of misinformation is hallucination.

  10. LLM10:2025 Unbounded Consumption Unbounded Consumption occurs when a Large Language Model (LLM) application allows users to conduct excessive and uncontrolled inferences, leading to risks such as denial of service (DoS), economic losses, model theft, and service degradation. A quota-enforcement sketch follows this list.
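
For LLM01, a common first-line control is to keep trusted system instructions and untrusted user input in separate message roles and screen the untrusted text against known injection phrasings. The patterns and the build_messages helper below are illustrative assumptions, not an OWASP-prescribed control; heuristic filters reduce, but do not eliminate, injection risk.

```python
import re

# Illustrative heuristic patterns (assumption); real deployments pair this
# with model-side guardrails and strict privilege separation.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"reveal (your )?system prompt",
]

def looks_like_injection(untrusted_text: str) -> bool:
    """Flag untrusted input that matches common injection phrasings."""
    lowered = untrusted_text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def build_messages(system_prompt: str, user_input: str) -> list[dict]:
    """Keep trusted instructions and untrusted input in separate roles."""
    if looks_like_injection(user_input):
        raise ValueError("possible prompt injection; rejecting input")
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]
```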
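
For LLM02, one mitigation is to redact obvious PII before text reaches the model, logs, or third-party APIs. The regexes below are simplified assumptions for illustration; production systems typically rely on a dedicated PII-detection service rather than hand-rolled patterns.

```python
import re

# Illustrative patterns only (assumption); coverage is deliberately minimal.
REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(text: str) -> str:
    """Replace detected PII with type placeholders before the text
    reaches the model, logs, or external APIs."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("Contact jane@example.com, SSN 123-45-6789"))
# -> Contact [EMAIL], SSN [SSN]
```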
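
For LLM03, a minimal supply-chain control is to pin the digest of a vetted model artifact and refuse to load anything that does not match. The file name and pinned digest below are hypothetical placeholders.

```python
import hashlib
from pathlib import Path

# Hypothetical pinned digest, recorded when the artifact was vetted.
PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def verify_model_artifact(path: Path) -> None:
    """Refuse to load a model file whose digest does not match the pin."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != PINNED_SHA256:
        raise RuntimeError(f"model artifact digest mismatch: {digest}")

# verify_model_artifact(Path("model.safetensors"))  # run before loading
```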
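
For LLM05, the sketch below treats model output as untrusted input: it parses the response as strict JSON, rejects unexpected fields, and HTML-escapes values before rendering. The ALLOWED_KEYS schema is an assumed example, not a standard contract.

```python
import html
import json

ALLOWED_KEYS = {"summary", "sentiment"}  # assumed response schema

def parse_model_output(raw: str) -> dict:
    """Treat model output as untrusted: parse strictly, reject extras."""
    data = json.loads(raw)  # raises on malformed JSON
    if not isinstance(data, dict) or set(data) - ALLOWED_KEYS:
        raise ValueError("unexpected fields in model output")
    return data

def render_summary(raw: str) -> str:
    """Escape before embedding in HTML so model output cannot inject markup."""
    data = parse_model_output(raw)
    return f"<p>{html.escape(str(data.get('summary', '')))}</p>"
```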
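
For LLM06, a least-privilege dispatcher executes only tools that are explicitly allowlisted and requires human approval before destructive ones run. The tools and flags here are hypothetical stubs.

```python
from typing import Callable

def get_weather(city: str) -> str:
    return f"(stub) weather for {city}"

def delete_record(record_id: str) -> str:
    return f"(stub) deleted {record_id}"

# Only explicitly registered tools are callable; destructive ones are flagged.
TOOLS: dict[str, tuple[Callable[[str], str], bool]] = {
    "get_weather": (get_weather, False),
    "delete_record": (delete_record, True),  # requires human approval
}

def dispatch(tool_name: str, argument: str, approved: bool = False) -> str:
    """Execute a model-requested tool call under least privilege."""
    if tool_name not in TOOLS:
        raise PermissionError(f"tool not allowlisted: {tool_name}")
    func, destructive = TOOLS[tool_name]
    if destructive and not approved:
        raise PermissionError(f"{tool_name} needs human approval")
    return func(argument)
```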
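
For LLM08, one mitigation is to enforce per-document access controls at retrieval time, not only at ingestion, so a shared vector store cannot leak across users or tenants. The Chunk structure and group-based ACL model are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    allowed_groups: frozenset[str]  # ACL captured at ingestion time

def retrieve(query_hits: list[Chunk], user_groups: set[str]) -> list[Chunk]:
    """Filter vector-search hits by the requesting user's group membership."""
    return [c for c in query_hits if c.allowed_groups & user_groups]

hits = [
    Chunk("public handbook", frozenset({"all"})),
    Chunk("salary data", frozenset({"hr"})),
]
print([c.text for c in retrieve(hits, {"all", "eng"})])  # -> ['public handbook']
```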
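
For LLM10, a sliding-window rate limit plus a hard cap on output tokens bounds per-user consumption. The quota values are illustrative; MAX_OUTPUT_TOKENS would be forwarded as the completion cap in the actual inference call.

```python
import time
from collections import defaultdict, deque

MAX_REQUESTS_PER_MINUTE = 20  # illustrative quota
MAX_OUTPUT_TOKENS = 512       # forward as the per-completion token cap

_request_log: dict[str, deque[float]] = defaultdict(deque)

def check_quota(user_id: str) -> None:
    """Sliding-window rate limit, enforced before any inference starts."""
    now = time.monotonic()
    window = _request_log[user_id]
    while window and now - window[0] > 60:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        raise RuntimeError("rate limit exceeded; try again later")
    window.append(now)

check_quota("user-123")  # raises once the per-minute quota is exhausted
```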
