Cybersecurity statistics about sensitive information
Nearly half of employees (46%) admit to pasting company information into public AI tools.
Nearly half of employees enter company-related information into public AI tools to complete tasks while concealing their AI use.
95% of publicly disclosed ransomware attacks involved the theft of sensitive information.
The number of individual breached records surged by more than 186%, exposing sensitive information such as passwords, emails, and credit card details.
There was a 43% increase in breached data shared on underground forums in 2024.
11% of files uploaded to AI applications include sensitive corporate content.
Only 20% of CISOs are able to identify over 75% of sensitive data across environments.
Security engineers report the highest ability to track sensitive data: 39% can identify over 75% of sensitive data across environments.
79% of security teams struggle to classify sensitive data used in AI/ML systems.
Less than half (48%) of organisations express high confidence in controlling sensitive data used for AI/ML training.
5.64% of sensitive data input into GenAI tools was sensitive code, such as access keys and proprietary source code.
63.8% of ChatGPT users were on the free tier, which received 53.5% of sensitive prompts.
8.5% of GenAI prompts contain sensitive information.
45.77% of sensitive data input into GenAI tools was customer data, such as billing information, customer reports, and customer authentication data.
26.83% of sensitive data input into GenAI tools was employee data, including payroll data, PII, and employment records.
14.88% of sensitive data input into GenAI tools was legal and finance data, such as sales pipeline data, investment portfolio data, and mergers and acquisitions information.
6.88% of sensitive data input into GenAI tools was security policies and reports.