Cybersecurity statistics about AI code
LLMs failed to secure code against cross-site scripting (CWE-80) in 86% of cases.
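To illustrate what CWE-80 (basic cross-site scripting) looks like in practice, here is a minimal, hypothetical Python sketch: user input embedded in HTML without neutralization executes as script, while escaping it renders it inert. The function names and payload are illustrative, not from any cited study.

```python
# Hypothetical sketch of CWE-80: improper neutralization of
# script-related HTML tags in a web page (basic XSS).
import html

def render_comment_insecure(user_input: str) -> str:
    # Vulnerable: input like "<script>alert(1)</script>" is emitted
    # verbatim and executes in the victim's browser.
    return f"<p>{user_input}</p>"

def render_comment_secure(user_input: str) -> str:
    # Mitigated: html.escape converts <, >, &, and quotes to HTML
    # entities, so the payload is displayed as text, not executed.
    return f"<p>{html.escape(user_input)}</p>"

payload = "<script>alert(1)</script>"
print(render_comment_secure(payload))
# -> <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```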
AI-generated code introduces security vulnerabilities in 45% of cases.
When given a choice between a secure and insecure method to write code, GenAI models chose the insecure option 45% of the time.
In 45% of all test cases, LLMs introduced vulnerabilities classified within the OWASP Top 10.
Java was found to be the riskiest language for AI code generation, with a security failure rate over 70%. Other major languages, such as Python, C#, and JavaScript, presented significant risk, with failure rates between 38% and 45%.
LLMs failed to secure code against log injection (CWE-117) in 88% of cases.
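For context on CWE-117 (log injection), here is a minimal, hypothetical Python sketch: unsanitized input containing newline characters can forge extra log entries, while stripping CR/LF keeps each entry on one line. The function names and input are illustrative, not drawn from any cited study.

```python
# Hypothetical sketch of CWE-117: improper output neutralization
# for logs.

def log_insecure(username: str) -> str:
    # Vulnerable: input like "alice\nINFO: admin logged in" breaks
    # onto a new line and forges a second, fake log entry.
    return f"INFO: login attempt by {username}"

def log_secure(username: str) -> str:
    # Mitigated: remove CR/LF so attacker input cannot start a new
    # log line.
    sanitized = username.replace("\r", "").replace("\n", " ")
    return f"INFO: login attempt by {sanitized}"

entry = log_secure("alice\nINFO: admin logged in")
print(entry)  # the forged entry stays on a single line
```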