We've curated 11 cybersecurity statistics about AI Risks to help you understand how emerging threats like deepfakes and automated attacks are reshaping the landscape of cybersecurity in 2025.
Showing 1-14 of 14 results
60-70% of AI-generated code lacks deployment environment awareness, generating code that runs locally but fails in production.
40-50% of AI-generated code inflates coverage metrics with meaningless tests rather than validating logic.
80-90% of AI-generated code rigidly follows conventional rules, missing opportunities for more innovative, improved solutions.
80-90% of AI-generated code creates hyper-specific, single-use solutions instead of generalizable, reusable components.
80-90% of AI-generated code generates functional code for immediate prompts but never refactors or architecturally improves existing code.
70-80% of AI-generated code violates code reuse principles, causing identical bugs to recur throughout codebases, requiring redundant fixes.
40-50% of AI-generated code reimplements from scratch instead of using established libraries, SDKs, or proven solutions.
20-30% of AI-generated code over-engineers for improbable edge cases, causing performance degradation and resource waste.
90-100% of AI-generated code contains excessive inline commenting, which dramatically increases computational burden and makes code harder to check.
40-50% of AI-generated code defaults to tightly-coupled monolithic architectures, reversing decade-long progress toward microservices.
The percentage of companies globally that felt very prepared to manage AI risks has remained relatively flat over the past three years, with 9% in 2023, 8% in 2024, and 12% in 2025.
94% of all AI services are at risk for at least one of the top Large Language Model (LLM) risk vectors, including prompt injection/jailbreak, malware generation, toxicity, and bias.
30% of respondents report the emergence of a new attack surface due to the use of AI by their business users.
Approximately 1 in 4 organizations said they’re concerned about how AI use in the enterprise will make them more attackable (AI and generative AI concerns)