Cybersecurity statistics about LLMs
LLMs failed to secure code against cross-site scripting (CWE-80) in 86% of cases.
In 45% of all test cases, LLMs introduced vulnerabilities classified within the OWASP Top 10.
LLMs failed to secure code against log injection (CWE-117) in 88% of cases (see the sketch after this list for what CWE-80 and CWE-117 flaws look like in code).
Prompts specifying a need for security or requesting OWASP best practices produced more secure results, yet still yielded vulnerable code for 5 of the 7 LLMs tested.
When prompted to generate secure code, GPT-4o still produced insecure outputs vulnerable to 8 of the 10 issues tested.
In response to simple, “naive” prompts, all LLMs tested generated insecure code vulnerable to at least 4 of the 10 common CWEs.
With naive prompts, ChatGPT scored 1.5/10 for secure code.
Claude 3.7 Sonnet scored 6/10 for secure code with naive prompts.
OpenAI’s GPT-4o performed worst, scoring 1/10 for secure code with naive prompts.
Claude 3.7 Sonnet scored 10/10 with security-focused prompts.
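For readers unfamiliar with the two CWE classes cited above, the following is a minimal, hypothetical Python sketch (not taken from the study or from any LLM output) contrasting a naive and a hardened variant of each: CWE-80 (cross-site scripting via unescaped HTML) and CWE-117 (log injection via unsanitized log input).

```python
# Hypothetical illustration only: the kinds of flaws the CWE-80 and CWE-117
# statistics refer to, each shown in a naive and a hardened variant.

import html
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def render_comment_naive(user_comment: str) -> str:
    # CWE-80: user input is embedded in HTML without escaping, so a payload
    # such as "<script>...</script>" executes in the victim's browser.
    return f"<div class='comment'>{user_comment}</div>"


def render_comment_hardened(user_comment: str) -> str:
    # Escaping HTML special characters neutralizes injected markup.
    return f"<div class='comment'>{html.escape(user_comment)}</div>"


def log_login_naive(username: str) -> None:
    # CWE-117: raw input reaches the log stream, so an attacker can inject
    # newlines and forge additional, legitimate-looking log entries.
    logger.info("Login attempt for user: %s", username)


def log_login_hardened(username: str) -> None:
    # Encoding CR/LF before logging prevents forged log lines.
    sanitized = username.replace("\r", "\\r").replace("\n", "\\n")
    logger.info("Login attempt for user: %s", sanitized)


if __name__ == "__main__":
    payload = "<script>alert('xss')</script>"
    print(render_comment_naive(payload))     # markup passes through verbatim
    print(render_comment_hardened(payload))  # markup is escaped

    forged = "alice\nINFO: Login attempt for user: admin"
    log_login_naive(forged)      # output spans two lines, one of them forged
    log_login_hardened(forged)   # newline is visibly encoded, single log line
```

In the vulnerable variants, attacker-controlled input reaches the HTML output or the log stream verbatim; the hardened variants neutralize it with html.escape and newline encoding, respectively.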