HomeTopicsAI security

AI security

We've curated 5 cybersecurity statistics about AI security to help you understand how AI is being used to detect threats, enhance defenses, and even automate responses in the ever-evolving landscape of cybersecurity in 2025.

Showing 1-18 of 18 results

74% of organizations expect their focus on AI security to increase significantly over the next two years.

Deskpro11/16/2025

100% of organizations plan to invest more of their budget in AI-related security initiatives in the next 12 months.

83% of healthcare IT and compliance leaders have raised concerns about AI security.

69% of healthcare IT leaders feel pressured to adopt AI faster than they can secure it.

78% of U.S. CISOs expect AI to create a moderate or significant amount of new IT or security work for their teams due to AI-related security risks and vulnerabilities.

78% of CISOs lack a formal strategy for handling AI identities in a zero trust security architecture in 2025.

AI security incidents have doubled since 2024

Agentic AI caused the most dangerous failures—crypto thefts, API abuses, and legal disasters, and Supply chain attacks.

35% of all real-world AI security incidents were caused by simple prompts.

Some prompt injection incidents led to over $100,000 in real losses without requiring any code to be written.

Generative AI (GenAI) was involved in 70% of real-world AI security incidents.

Only 23% of organizations surveyed have implemented comprehensive AI security policies.

77% of organizations have the overprivileged default Compute Engine service account configured in Google Vertex AI Notebooks

14% of organizations using Amazon Bedrock do not explicitly block public access to at least one AI training bucket

Approximately 70% of cloud AI workloads contain at least one unremediated vulnerability

Tenable Research found CVE-2023-38545—a critical curl vulnerability—in 30% of cloud AI workloads

5% of organizations using Amazon Bedrock have at least one overly permissive bucket

91% of Amazon SageMaker users have at least one notebook that, if compromised, could grant unauthorized access