How GenAI Is Changing Enterprise Security Risks
New research from cybersecurity firm Netskope reveals a dramatic 30-fold increase in enterprise data being sent to generative AI applications over the past year, raising significant security concerns for organizations worldwide.
According to Netskope's 2025 Generative AI Cloud and Threat Report, based on anonymized usage data collected by the Netskope One platform, employees are increasingly sharing sensitive information with AI tools, including source code, regulated data, passwords, keys, and intellectual property. This surge in data sharing substantially increases the risk of costly data breaches, compliance violations, and intellectual property theft.
The report highlights that "shadow AI" has become the predominant shadow IT challenge for organizations, with 72% of enterprise users accessing generative AI applications through personal accounts rather than company-managed solutions. Additionally, 75% of enterprise users are now accessing applications with embedded generative AI features.
"Despite earnest efforts by organizations to implement company-managed genAI tools, our research shows that shadow IT has turned into shadow AI," said James Robinson, CISO at Netskope. "This ongoing trend, when combined with the data in which it is being shared, underscores the need for advanced data security capabilities so that security and risk management teams can regain governance, visibility, and acceptable use over genAI usage within their organisations."
The cybersecurity firm has identified 317 generative AI applications in enterprise use, including popular tools like ChatGPT, Google Gemini, and GitHub Copilot. The report also notes a significant shift toward local hosting of AI infrastructure, with the percentage of organizations running generative AI locally increasing from less than 1% to 54% over the past year.
While local hosting can reduce the risk of unwanted data exposure to third-party cloud applications, Netskope warns that the transition introduces new data security risks of its own, including supply chain vulnerabilities, data leakage, and prompt injection attacks.
To address these emerging threats, Netskope recommends that enterprises assess their generative AI landscape, strengthen controls on AI applications, and implement comprehensive security measures aligned with frameworks such as the OWASP Top 10 for Large Language Model Applications and NIST's AI Risk Management Framework.
Read the full report here.