Facebook Google Plus Twitter LinkedIn YouTube RSS Menu Search Resource - BlogResource - WebinarResource - ReportResource - Eventicons_066 icons_067icons_068icons_069icons_070

What You Must Know About the OWASP Top 10 for LLM Applications 2025 update



Abstract image showing hands on computer keyboard with multicolored lines of light intersecting

As GenAI becomes a vital part of business operations, the risks it brings are evolving just as fast. Here are four key takeaways from the latest OWASP LLM guidance.

GenAI products are evolving at lightning speed and, with them, the security landscape is changing too. New threats are emerging, old ones are shifting and the risks are becoming harder to ignore. To keep up, the Open Web Application Security Project (OWASP) Foundation has updated its OWASP Top 10 for LLMs framework — and these updates are game-changing. If you’re working with GenAI, this is a must-read. We’re breaking down the most significant updates and sharing real-world examples straight from Fortune 500 companies we are working with!

What’s New in the OWASP Top 10 for LLMs 2025?

1. Proprietary algorithms and sensitive business data (LLM02:2025 Sensitive Information Disclosure)

Are you using a code copilot like GitHub Copilot? If so, this one’s for you.

OWASP now chooses to specifically highlight the risks of a company’s IP data exposure. As each line of code is sent to the code copilot, the company’s core technology and IP are exposed to threats. The company’s most sensitive algorithms are now at risk of exposure through backdoors (MITRE ATLAS — Backdoor ML Model: Poison ML Model) that might be embedded in the copilot’s training data and later triggered by adversarial inputs (as detailed by Embrace The Red’s blog GitHub Copilot Chat: From Prompt Injection to Data Exfiltration).

And it’s not just about deep-tech coding algorithms. While working with a Fortune 500 company in the investment sector, we found that users were accidentally sharing investment strategies directly with LLM-based chats, such as OpenAI’s ChatGPT and Google’s Gemini, putting the company’s sensitive information at risk.

2. Shifting right: Model poisoning (LLM04:2025 Data and Model Poisoning)

Previously focused solely on data poisoning, this category now includes model poisoning, expanding the spotlight to production risks.

As adversarial inputs have been documented as a pathway to privilege escalation in models, the focus has shifted right to prioritizing the security of models in live production environments.

From a shift-right perspective, model poisoning refers to the intentional manipulation or corruption of a deployed AI model’s performance through malicious inputs or adversarial interactions in production environments. Instead of targeting the model during training (as in traditional model poisoning), the attack focuses on introducing harmful patterns or behaviors in real-time.

3. Say hello to RAG: A new risk in the neighborhood (LLM08:2025 Vector and Embedding Weaknesses)

Attention, Microsoft Copilot and ChatGPT users!

This entirely new section focuses on Retrieval-Augmented Generation ( RAG), which is essentially a vector database. Key risks include:

  • Sensitive data exposure: Information retrieved from the RAG might be delivered to unauthorized users.
  • Cross-context information leakage: When the engine confuses user contexts within the same tenant, sensitive data can be inadvertently exposed.

RAG boosts the ability to get the organizational data that users are looking for, quickly. It means that unauthorized users might be able to get the restricted data that they are looking for quickly, too. Imagine a user from R&D accessing legal contracts, or an attacker compromising a lower level user to query the model for the company’s most secret IP — access to sensitive data through the model becomes easier, for good and for bad.

4. Hallucinations: When misinformation becomes destructive (LLM09:2025 Misinformation)

The wildly inaccurate yet confident responses generated by LLMs — commonly referred to as ‘hallucinations’ — are not only disruptive, they’re also recognized as risks. The former LLM09 Overreliance guidance has been updated, with OWASP now putting the spotlight on how hallucinations can mislead users and lead to decisions based on fabricated information.

Code copilots are particularly vulnerable here, sometimes recommending vulnerable or typo-squatted code packages, which could compromise a company’s codebase.
Because chat responses often sound authoritative and data-driven, it’s natural to trust them. However, relying on these responses without verification could land the company in serious legal trouble.

5. Operational disruption (LLM10:2025 Unbounded Consumption)

As GenAI becomes integral to business operations, disrupting its performance directly harms the company.

The former Denial of Service (DOS) guidance has expanded to Unbounded Consumption, encompassing scenarios such as:

  • DoW (Denial of Wallet): Attackers drive up model’s computing costs, bleeding the company dry.
  • Resource-Intensive Queries: Legitimate-looking inputs overwhelm systems, causing crashes.

If you thought cloud bills were high before, wait until an adversarial LLM query hits your budget. For example, by simply asking the LLM in a prompt to use a minimum number of tokens in its response, an attacker can cause your company to spend a significant amount of money (https://x.com/fabianstelzer/status/1868214722308219061)

Conclusion

The OWASP Top 10 for LLMs 2025 makes one thing clear: as GenAI becomes a vital part of business operations, the risks it brings are evolving just as fast. With AI products advancing and the technology behind them constantly shifting, the threat landscape is changing rapidly. Staying informed and keeping up with these risks isn’t just important — it’s essential to protecting your business.


Cybersecurity news you can use

Enter your email and never miss timely alerts and security guidance from the experts at Tenable.