HackedGPT: Novel AI Vulnerabilities Open the Door for Private Data Leakage
Tenable Research has discovered seven vulnerabilities and attack techniques in ChatGPT, including unique indirect prompt injections, exfiltration of personal user information, persistence, evasion, and bypass of safety mechanisms.
Cybersecurity Snapshot: Top Guidance for Improving AI Risk Management, Governance and Readiness
Many organizations are playing catch-up in key AI security policy areas, such as usage governance, risk oversight, data protection, and staff training. In this Cybersecurity Snapshot special edition, we round up recent guidance on preparing for, managing and governing AI cyber risks.
Security for AI: A Practical Guide to Enforcing Your AI Acceptable Use Policy
An AI acceptable use policy can help your organization mitigate the risk of employees accidentally exposing sensitive data to public AI tools. Benchmark your organization’s policy against our best practices and discover how prompt-level visibility from Tenable AI Exposure eases policy enforcement.
Cybersecurity Awareness Month Is for Security Leaders, Too
Think you know all there is to know about cybersecurity? Guess again. Shadow AI is challenging security leaders with many of the same issues raised by other “shadow” technologies. Only this time, it’s evolving at breakneck speed.
Synack + Tenable: AI-Powered Partnership Translates Vulnerability Insights into Action
The combined Synack/Tenable solution reduces alert noise for overloaded security teams, isolating the most exploitable threats so they can proactively close security gaps faster.
Why Google’s Warning Highlights Critical Risk of AI Context-Injection Attacks
Google, with its unparalleled visibility into Gemini, recently alerted its legion of Gmail users about indirect prompt attacks, which exploit AI context sources like emails, calendar invites and files. Coming from a major AI vendor, the frank and direct public alert leaves no doubt that…
Tenable Jailbreaks GPT-5, Gets It To Generate Dangerous Info Despite OpenAI’s New Safety Tech
Within just 24 hours of the release of OpenAI’s GPT-5, Tenable Research successfully managed to jailbreak the model by getting it to share detailed instructions for how to build an explosive. Our finding is concerning, given that OpenAI described GPT-5's prompt safety technology as significantly…
The AI Security Dilemma: Navigating the High-Stakes World of Cloud AI
AI presents an incredible opportunity for organizations even as it expands the attack surface in new and complex ways. For security leaders, the goal isn't to stop AI adoption but to enable it securely. Artificial Intelligence is no longer on the horizon; it's here, and it's being built and deployed…
Introducing Tenable AI Exposure: Stop Guessing, Start Securing Your AI Attack Surface
Now available in Tenable One, Tenable AI Exposure gives you visibility into how your teams use AI platforms and where that usage could put your data, users and defenses at risk.
CVE-2025-54135, CVE-2025-54136: Frequently Asked Questions About Vulnerabilities in Cursor IDE (CurXecute and MCPoison)
Researchers have disclosed two vulnerabilities in Cursor, the popular AI-assisted code editor, that impact its handling of model context protocol (MCP) servers, which could be used to gain code execution on vulnerable systems.
The White House AI Action Plan: A Critical Opportunity to Secure the Future
AI without built-in cybersecurity remains a liability. The AI Action Plan presents a pivotal opportunity to get this right by emphasizing a secure-by-design approach.
Cybersecurity Snapshot: AI Security Trails AI Usage, Putting Data at Risk, IBM Warns, as OWASP Tackles Agentic AI App Security
Check out fresh insights on AI data security from IBM’s “Cost of a Data Breach Report 2025.” Plus, OWASP issues guide on securing Agentic AI apps. In addition, find out how to protect your org against the Scattered Spider cyber crime group. And get the latest on zero-trust microsegmentation;…