Gartner® names Tenable the current "Company to Beat" for AI-powered exposure assessment in a 2025 report.
"Due to its asset and attack surface coverage, application of AI, and reputation for vulnerability assessment, Tenable is the leading company for AI-powered exposure assessment," Gartner writes in the report "AI Vendor Race: Tenable Is the Company to Beat for AI-Powered Exposure Assessment."
What Anthropic’s Latest Model Reveals About the Future of Cybersecurity
AI can find vulnerabilities with unprecedented speed, but discovery alone doesn’t reduce cyber risk. We need exposure prioritization, contextual risk analysis, and AI-driven remediation to transform findings into security outcomes.
From Clawdbot to Moltbot to OpenClaw: Security Experts Detail Critical Vulnerabilities and 6 Immediate Hardening Steps for the Viral AI Agent
Moltbot, the viral AI agent, is riddled with critical vulnerabilities, exposed control interfaces, and malicious extensions that put users' sensitive data at risk. Understand the immediate security practices you can implement to mitigate this enormous agentic AI security risk.
Introducing Tenable One AI Exposure: A New Standard for Securing AI Use at Scale
Continuously discover and monitor all AI usage across your organization, including shadow AI, agents, browser plug-ins, and more, with Tenable One AI Exposure. Map complex AI workflows to reveal high-impact exposures and monitor compliance with AI security and acceptable-use policies.
Security for AI: How Shadow AI, Platform Risks, and Data Leakage Leave Your Organization Exposed
Your employees are using AI whether you’ve sanctioned it or not. And even if you’ve carefully vetted and approved an enterprise-grade AI platform, you’re still at risk of attacks and data leakage.
Microsoft Copilot Studio Security Risk: How Simple Prompt Injection Leaked Credit Cards and Booked a $0 Trip
The no-code power of Microsoft Copilot Studio introduces a new attack surface. Tenable AI Research demonstrates how a simple prompt injection attack against an AI agent bypasses security controls, leading to data leakage and financial fraud. We provide five best practices to secure your AI agents.
Detecting AI Security Risks Requires Specialized Tools: Time to Move Beyond DLP and CASB
Learn why your existing security tech won’t detect data exposure, prompt injection and manipulation, and other AI security risks from ChatGPT Enterprise, Microsoft 365 Copilot, and other LLMs.
Agentic AI Security: Keep Your Cyber Hygiene Failures from Becoming a Global Breach
The Claude Code weaponization reveals the true threat: the democratization and orchestration of existing attack capabilities. It proves that neglecting fundamental cyber hygiene allows malicious AI to execute massive-scale attacks with unprecedented speed and minimal attacker skill.
A Practical Defense Against AI-led Attacks
The era of AI-driven cyberattacks is here, demonstrated by the recent abuse of an agentic AI tool in a broad espionage campaign. Defense requires a new approach centered on preemptive exposure management, combining reinforced security fundamentals with defining the new AI attack surface and…
How Rapid AI Adoption Is Creating an Exposure Gap
As organizations rush to deploy AI, enterprise defenses are struggling to keep up. This blog explores the emerging AI exposure gap — the widening divide between innovation and protection — and what security leaders can do to close it.
HackedGPT: Novel AI Vulnerabilities Open the Door for Private Data Leakage
Tenable Research has discovered seven vulnerabilities and attack techniques in ChatGPT, including unique indirect prompt injections, exfiltration of personal user information, persistence, evasion, and bypass of safety mechanisms.
Cybersecurity Snapshot: Top Guidance for Improving AI Risk Management, Governance and Readiness
Many organizations are playing catch-up in key AI security policy areas, such as usage governance, risk oversight, data protection, and staff training. In this Cybersecurity Snapshot special edition, we round up recent guidance on preparing for, managing, and governing AI cyber risks.