
AI Risk

Cybersecurity statistics about AI risk


80-90% of AI-generated code creates hyper-specific, single-use solutions instead of generalizable, reusable components.
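
To make the pattern concrete, here is a minimal, hypothetical Python sketch (not drawn from the underlying study): a hyper-specific function hardcodes one use case, while a generalizable version parameterizes the same logic.

```python
# Hypothetical anti-pattern: a single-use function with one report's
# currency and tax rate baked in.
def total_for_march_invoice_usd(amounts):
    return sum(amounts) * 1.0825  # 8.25% tax hardcoded

# A generalizable, reusable equivalent of the same logic.
def total_with_tax(amounts, tax_rate):
    """Sum line-item amounts and apply a tax rate."""
    return sum(amounts) * (1 + tax_rate)

print(total_for_march_invoice_usd([100.0, 50.0]))      # 162.375
print(total_with_tax([100.0, 50.0], tax_rate=0.0825))  # 162.375
```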

80-90% of AI-generated code solves the immediate prompt with functional code but never refactors or architecturally improves the existing codebase.

70-80% of AI-generated code violates code reuse principles, causing identical bugs to recur throughout codebases and requiring redundant fixes.
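
As a hypothetical illustration (the names and numbers are invented for the sketch), copy-pasted logic means one bug must be found and fixed in every copy, while a shared helper fixes it once:

```python
# Anti-pattern: the same discount logic pasted into two call sites,
# each carrying the identical off-by-one bug (a quantity of exactly
# 10 gets no discount).
def checkout_total(price, qty):
    discount = 0.1 if qty > 10 else 0.0  # bug: should be qty >= 10
    return price * qty * (1 - discount)

def quote_total(price, qty):
    discount = 0.1 if qty > 10 else 0.0  # identical bug, fixed separately
    return price * qty * (1 - discount)

# Reuse localizes the fix: correct bulk_discount once and every
# caller is repaired.
def bulk_discount(qty):
    return 0.1 if qty >= 10 else 0.0

def total(price, qty):
    return price * qty * (1 - bulk_discount(qty))
```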

40-50% of AI-generated code reimplements from scratch instead of using established libraries, SDKs, or proven solutions.

20-30% of AI-generated code over-engineers for improbable edge cases, causing performance degradation and resource waste.

90-100% of AI-generated code contains excessive inline commenting, which dramatically increases computational burden and makes the code harder to review.

40-50% of AI-generated code inflates coverage metrics with meaningless tests rather than validating logic.
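
A hypothetical Python example of the pattern: the first test executes every line of the function (so coverage rises) but would pass even if the logic were wrong; the second actually validates behavior.

```python
def apply_discount(price, rate):
    return price * (1 - rate)

# Coverage-inflating test: runs the code but asserts nothing meaningful.
def test_apply_discount_runs():
    result = apply_discount(100.0, 0.2)
    assert result is not None  # passes even if the math is wrong

# A test that validates the logic.
def test_apply_discount_value():
    assert apply_discount(100.0, 0.2) == 80.0
```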

80-90% of AI-generated code rigidly follows conventional rules, missing opportunities for more innovative, improved solutions.

60-70% of AI-generated code lacks awareness of the deployment environment, so it runs locally but fails in production.

40-50% of AI-generated code defaults to tightly coupled monolithic architectures, reversing a decade of progress toward microservices.
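
As a hedged sketch of what tight coupling looks like (hypothetical Python; the class names are illustrative), the coupled service constructs its own storage backend, while the decoupled one accepts any backend a separate deployment provides:

```python
class PostgresClient:
    def save(self, record):
        print(f"saved {record}")

# Tightly coupled: the service hardwires its database, so it cannot be
# deployed, scaled, or tested independently of PostgresClient.
class CoupledOrderService:
    def __init__(self):
        self.db = PostgresClient()

# Loosely coupled: the storage backend is injected; any object with a
# save() method works, which is what enables service boundaries.
class OrderService:
    def __init__(self, store):
        self.store = store

    def place(self, order):
        self.store.save(order)

OrderService(PostgresClient()).place({"id": 1})
```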

The percentage of companies globally that felt very prepared to manage AI risks has remained relatively flat over the past three years, with 9% in 2023, 8% in 2024, and 12% in 2025.

94% of all AI services are at risk for at least one of the top Large Language Model (LLM) risk vectors, including prompt injection/jailbreak, malware generation, toxicity, and bias.

39.5% of AI tools carry the key risk of inadvertently exposing user interactions and training data.

34.4% of AI tools make user data accessible to third parties without adequate controls.

83.8% of enterprise data input into AI tools flows to platforms classified as medium, high, or critical risk.

Only 11% of AI tools assessed qualify for low or very low risk classifications.

Cyberhaven's assessment of over 700 AI tools found that a troubling 71.7% fall into high or critical risk categories.

30% of respondents report the emergence of a new attack surface due to the use of AI by their business users.

Only 16.2% of enterprise data input into AI tools is destined for enterprise-ready, low-risk alternatives.

Approximately 1 in 4 organizations say they are concerned that the use of AI and generative AI in the enterprise will make them more vulnerable to attack.