We've curated the following cybersecurity statistics about Shadow AI to help you understand how unauthorized AI tools and applications are proliferating in organizations, and the unique risks and challenges they pose in 2025.
75% of security practitioners believe that shadow AI will surpass the risks previously posed by shadow IT in 2025.
36% of German employees reported that they have worked on AI-based applications that their employers did not approve.
43% of US employees reported that they have worked on AI-based applications that their employers did not approve.
27% of employees have used AI-based applications that were not purchased or approved by their company.
16% of employees reported analyzing company data with the help of AI when using AI at work.
16% of employees reported working on performance reviews or hiring using employee data when using AI at work.
18% of organizations identify GenAI features embedded in SaaS applications as their second-highest Shadow AI concern.
Other Shadow AI vectors, including personal accounts, third-party APIs, plugins, and local applications, each fall below 12% of organizations' concerns.
23% of organizations acknowledge inadequate preparation to address unapproved AI tools and services.
16% of organizations expect Shadow AI management to require the most new investment in AI security over the next 12 months.
21% of organizations cite standalone GenAI tools (like ChatGPT, Claude, and image generators such as Midjourney) as their primary Shadow AI concern.
16% of organizations identify AI agents operating with user credentials as a Shadow AI concern.
49% of organizations anticipate Shadow AI incidents.
23% of organizations adopting AI identify Shadow AI and unapproved tools as an area where they are least prepared to address threats.
14% of organizations identify orchestration frameworks as a Shadow AI concern.
18% of companies are affected by Shadow AI.
Over half of all current app adoption among enterprise users is estimated to be shadow AI.
Organizations with high levels of shadow AI use incurred breach costs that were, on average, $670,000 higher.
In security incidents involving shadow AI, personally identifiable information was compromised more often (65%) than the global average (53%).
In security incidents involving shadow AI, intellectual property was compromised more often (40%) than the global average (33%).