We've curated these cybersecurity statistics about sensitive data to help you understand how protecting personal information, financial records, and health data is evolving in 2025. Discover the latest trends in encryption, data breaches, and compliance challenges!
25% of all sensitive data disclosures involve technical data, with 65% of that consisting of proprietary source code copied into generative AI tools.
57% of sensitive data uploaded to generative AI tools is classified as business or legal data, with 35% of that involving contract or policy drafting.
15% of all sensitive data uploaded to generative AI tools involves personal or employee data, including identifiers such as names and addresses.
12% of all sensitive data exposures originate from personal accounts, including free versions of generative AI tools.
26.4% of all file uploads to generative AI tools contained sensitive data between July and September 2025, an increase from 22% in Q2 2025.
Only 15% of organizations feel fully prepared to handle the movement of sensitive data through SaaS and Shadow IT tools.
72% of organizations lack visibility into how users interact with sensitive data across endpoints, cloud apps, and GenAI platforms.
49% of organizations agree, and 23% strongly agree, that they lack visibility into how users interact with sensitive data across endpoints, cloud apps, and GenAI platforms.
96% of healthcare organizations researched had at least two data loss or exfiltration incidents involving sensitive and confidential healthcare data in the past two years.
31% of respondents say sensitive data requests top their list of app concerns.
91% of organizations believe that sensitive data should be allowed in AI training.
Sensitive data exposure stands at 10.5% in the financial services industry, versus an 8.0% average across other industries.
The average employee at a small healthcare organization has access to more than 5,500 sensitive files.
28% of the workforce have admitted to using AI to access sensitive data.
57% of employees input sensitive data into free-tier AI tools.
Of the prompts and files submitted to 300 GenAI tools and AI-enabled SaaS applications between April and June, 22% of files (4,400 files) and 4.37% of prompts (43,700 prompts) were found to contain sensitive information.
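The totals implied by these percentages can be back-computed, assuming the reported figures are exact. This short sketch (variable names are illustrative) recovers the sample sizes behind the stat:

```python
# Back-compute the implied totals from the reported rates and counts.
# Assumes the published figures (22%, 4.37%, 4,400 files, 43,700 prompts) are exact.
sensitive_files, file_rate = 4_400, 0.22
sensitive_prompts, prompt_rate = 43_700, 0.0437

total_files = sensitive_files / file_rate        # implied number of files analyzed
total_prompts = sensitive_prompts / prompt_rate  # implied number of prompts analyzed
```

Under those assumptions, roughly 20,000 files and 1,000,000 prompts were analyzed over the quarter.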
Code leakage was the most common type of sensitive data sent to GenAI tools.
The average enterprise uploaded 1.32GB of files (half of which were PDFs) to GenAI tools and AI-enabled SaaS applications in Q2. A full 21.86% of these files contained sensitive data.
535 separate incidents of sensitive data exposure were recorded involving Chinese GenAI tools.
Of these incidents involving Chinese GenAI tools, the exposed data types broke down as follows: 32.8% involved source code, access credentials, or proprietary algorithms; 18.2% included M&A documents and investment models; 17.8% exposed PII such as customer or employee records; and 14.4% contained internal financial data.
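Statistics like these are why many organizations screen prompts and file contents for sensitive patterns before they ever reach a GenAI tool. The following is a minimal sketch of such a pre-upload check; the pattern set, function names, and regexes are illustrative assumptions, not a production DLP rule set:

```python
import re

# Illustrative patterns for a few common sensitive-data categories.
# A real DLP system would use far richer detection than these regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def find_sensitive(text: str) -> list[tuple[str, str]]:
    """Return (category, matched_text) pairs for every pattern hit."""
    hits = []
    for category, pattern in SENSITIVE_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((category, match))
    return hits

def safe_to_upload(text: str) -> bool:
    """True only if no sensitive pattern matches the text."""
    return not find_sensitive(text)
```

For example, `safe_to_upload("quarterly roadmap draft")` would pass, while a prompt containing an email address or an AWS-style access key would be flagged before upload.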