Cybersecurity Statistics About Generative AI
The average organization used 27 distinct AI tools in Q3 2025, up from 23 in Q2 2025.
12% of all sensitive data exposures originate from personal accounts, including free versions of generative AI tools.
25% of all sensitive data disclosures involve technical data, with 65% of that consisting of proprietary source code copied into generative AI tools.
The average enterprise uploaded more than three times as much data to generative AI platforms in Q3 2025 as in Q2 2025: 4.4 GB versus 1.32 GB.
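As a quick sanity check of the "more than three times" claim, the two reported figures can be compared directly (the variable names below are illustrative, not from the source report):

```python
# Quarter-over-quarter growth in average enterprise uploads to gen AI platforms.
q2_gb = 1.32  # average data uploaded per enterprise in Q2 2025 (GB)
q3_gb = 4.40  # average data uploaded per enterprise in Q3 2025 (GB)

ratio = q3_gb / q2_gb
print(f"Q3 uploads were {ratio:.2f}x Q2 uploads")  # ratio exceeds 3, i.e. more than triple
```

The ratio works out to roughly 3.3x, consistent with the stated "more than three times" increase.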
26.4% of all file uploads to generative AI tools contained sensitive data between July and September 2025, an increase from 22% in Q2 2025.
57% of sensitive data uploaded to generative AI tools is classified as business or legal data, with 35% of that involving contract or policy drafting.
15% of all sensitive data uploaded to generative AI tools involves personal or employee data, including identifiers such as names and addresses.
63% of retailers plan to invest significantly in generative AI to defend against social engineering attacks.
64% of organizations identified data compromise through generative AI as their top mobile risk.
31% of organizations are likely to make significant investments in generative AI to defend against social engineering attacks.
70% of students are early adopters of generative AI and use it to create or modify images.
67% of organizations are implementing usage guidelines for GenAI.
64% of global CISOs say enabling GenAI tool use is a strategic priority over the next two years.
More than half (59%) of organizations restrict employee use of GenAI tools outright.
In the U.S., 80% of CISOs express concern over potential customer data loss via public GenAI platforms.
Three in five CISOs (60%) worry about customer data loss via public GenAI tools.
In incidents involving Chinese GenAI tools, the exposed data types included: 32.8% involving source code, access credentials, or proprietary algorithms; 18.2% including M&A documents and investment models; 17.8% exposing PII such as customer or employee records; and 14.4% containing internal financial data.
72.6% of all sensitive prompts analyzed in Q2 originated in ChatGPT.
13.7% of all sensitive prompts analyzed in Q2 originated in Microsoft Copilot.
1.8% of all sensitive prompts analyzed in Q2 originated in Perplexity.