
Harmonic Security

Cybersecurity reports and statistics published by Harmonic Security

8 categories · 3 reports

Recent Statistics & Reports

26.4% of all file uploads to generative AI tools contained sensitive data between July and September 2025, an increase from 22% in Q2 2025.

57% of sensitive data uploaded to generative AI tools is classified as business or legal data, with 35% of that involving contract or policy drafting.

15% of all sensitive data uploaded to generative AI tools involves personal or employee data, including identifiers such as names and addresses.

The average organization used 27 distinct AI tools in Q3 2025, compared with the 23 new tools introduced in Q2 2025.

The average enterprise uploaded more than three times as much data to generative AI platforms in Q3 2025 as in Q2 2025: 4.4GB versus 1.32GB.

25% of all sensitive data disclosures involve technical data, with 65% of that consisting of proprietary source code copied into generative AI tools.

12% of all sensitive data exposures originate from personal accounts, including free versions of generative AI tools.

26.3% of ChatGPT use by employees was via personal accounts.

15% of Google Gemini use by employees was via personal accounts.

13.7% of all sensitive prompts analysed in Q2 originated in Microsoft Copilot.

72.6% of all sensitive prompts analysed in Q2 originated in ChatGPT.

1.8% of all sensitive prompts analysed in Q2 originated in Perplexity.

Among the recorded incidents involving Chinese GenAI tools, the exposed data types included: 32.8% involved source code, access credentials, or proprietary algorithms; 18.2% included M&A documents and investment models; 17.8% exposed PII such as customer or employee records; and 14.4% contained internal financial data.

Of analyzed prompts and files submitted to 300 GenAI tools and AI-enabled SaaS applications between April and June, 22% of files (totaling 4,400 files) and 4.37% of prompts (totaling 43,700 prompts) were found to contain sensitive information.
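Taken at face value, those percentages and absolute counts imply the approximate size of the analyzed sample. The sketch below is a back-of-envelope derivation from the figures quoted above, not totals stated by Harmonic Security.

```python
# Back-of-envelope check: derive the implied sample sizes from the
# percentages and counts quoted above (treated as exact for illustration).

sensitive_files = 4_400          # files flagged as containing sensitive data
sensitive_file_rate = 0.22       # 22% of all files analyzed

sensitive_prompts = 43_700       # prompts flagged as containing sensitive data
sensitive_prompt_rate = 0.0437   # 4.37% of all prompts analyzed

total_files = sensitive_files / sensitive_file_rate        # ~20,000 files
total_prompts = sensitive_prompts / sensitive_prompt_rate  # ~1,000,000 prompts

print(f"Implied files analyzed:   {total_files:,.0f}")
print(f"Implied prompts analyzed: {total_prompts:,.0f}")
```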

The average enterprise uploaded 1.32GB of files (half of which were PDFs) to GenAI tools and AI-enabled SaaS applications in Q2. A full 21.86% of these files contained sensitive data.

Code leakage was the most common type of sensitive data sent to GenAI tools.

7.95% of employees in the average enterprise used a Chinese GenAI tool.

535 separate incidents of sensitive exposure were recorded involving Chinese GenAI tools.

Files sent to GenAI tools showed a disproportionately high concentration of sensitive and strategic content compared with prompts: files were the source of 79.7% of all stored credit card exposures, 75.3% of customer profile leaks, 68.8% of employee PII incidents, and 52.6% of total exposure volume in financial projections.

47.42% of sensitive employee uploads to Perplexity were from users with standard (non-enterprise) accounts.
