Australian organisations are addressing Shadow AI, but new risks are already emerging
Shadow IT has always been a headache for security teams. Each successive shift in enterprise tech has created blind spots and hidden behaviours that need to be discovered and secured. With the advent of SaaS apps, employees suddenly had a myriad of tools available to collaborate and work more effectively. They adopted them quickly, often unaware of security approval processes, creating SaaS sprawl, shadow cloud, and significant changes to digital estates that some organisations are still grappling with.
The emergence of generative AI has brought similar challenges, with Shadow AI permeating most organisations. In 2024, a staggering 80% of Australian workers were using personal generative AI accounts at work, according to a report from the Netskope Threat Labs team, which has been documenting the exposure of sensitive data through genAI use in the workplace since 2023. Its ongoing research, based on actual usage patterns from global and Australian businesses, shows that leaks of intellectual property, regulated data, and source code in genAI prompts occur consistently across geographies and industries, illustrating the data security risks of unmonitored and unsecured genAI usage.
But in a local cyber agenda dominated by pessimistic reports and cyber incidents, the most recent analysis delivers a glimmer of hope. In less than a year, the proportion of Australians using personal genAI accounts at work dropped sharply, from 80% to 55%. That could be read as a decline in workplace genAI use, but the drop is a direct result of efforts by Australian organisations to centralise, gain visibility into, and secure genAI by deploying company-approved applications. It's a heartening finding that shows employees will adopt more secure behaviours when offered safe and easy-to-use alternatives.
Another sign of the growing maturity of AI security is Australian organisations' increased adoption of data loss prevention (DLP) tools, which rose to 41% from 32% last year. DLP can inspect prompts and data in real time and automatically block the transfer of sensitive information to unauthorised locations or contacts, acting as a safety net against human error. Real-time user coaching tools can also support employees, presenting a pop-up when they attempt a risky action and suggesting more secure alternatives, or asking them to pause and justify or reconsider the action. Research shows that a large majority of users (73% globally) choose not to proceed with risky behaviour when presented with these coaching prompts.
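To make the mechanism concrete, the sketch below shows, in highly simplified form, how a prompt-inspection control might flag sensitive content before it reaches an external genAI service. The patterns, category names, and actions are illustrative assumptions, not any particular vendor's implementation.

```python
# Conceptual sketch of DLP-style prompt inspection. Pattern names and the
# block/coach actions are hypothetical examples, not a real product's rules.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "tax_file_number": re.compile(r"\b\d{3}\s\d{3}\s\d{3}\b"),  # AU TFN-style format
}

def inspect_prompt(prompt: str) -> dict:
    """Return the sensitive categories found and a suggested action."""
    matches = [name for name, pattern in SENSITIVE_PATTERNS.items()
               if pattern.search(prompt)]
    if matches:
        # A real control might block outright, redact, or show a coaching
        # pop-up asking the user to justify or reconsider the action.
        return {"action": "coach_and_block", "categories": matches}
    return {"action": "allow", "categories": []}

print(inspect_prompt("Summarise this contract. Card: 4111 1111 1111 1111"))
# {'action': 'coach_and_block', 'categories': ['credit_card']}
```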
LLMs, however, are only the tip of the AI iceberg, and AI-related cyber risk is becoming more multi-faceted as enterprise AI usage spreads and deployment models diversify. According to the Netskope Threat Labs researchers, 23% of Australian organisations are using on-premises LLM interfaces, and globally, 5.5% of organisations already have users running on-premises AI agents built with popular agent frameworks.
These platforms, which allow users to design and deploy AI models and agents within their organisation, are only going to grow in popularity in the coming months. The appeal is the retention of data ownership, but these on-premises models often ship with very few inherent security features and require proper configuration by security teams before they can be considered safe. Custom AI deployments also tend to rely on open source components, which can create AI supply chain security issues, some of which have already been reported. Finally, most AI models and agents require direct access to enterprise data sources to train or complete their tasks; without restricting their access levels and monitoring their operations, these systems can become over-permissioned and easily expose sensitive data.
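As a simplified illustration of that last point, the sketch below shows one way to constrain and audit what an on-premises agent can read from an enterprise data source. The agent name, table, and allow-list are hypothetical, and a real deployment would enforce this at the data platform or gateway layer rather than in application code.

```python
# Conceptual sketch: a least-privilege, audited wrapper between an AI agent
# and enterprise data. The agent only ever sees columns it has been granted,
# and every request is logged for review. All names here are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-data-access")

ALLOWED_COLUMNS = {
    "support_tickets": {"ticket_id", "status", "summary"},  # no customer PII granted
}

def fetch_for_agent(agent_id: str, table: str, columns: list[str]) -> list[dict]:
    """Return only explicitly granted columns; refuse and log anything else."""
    granted = ALLOWED_COLUMNS.get(table, set())
    denied = [c for c in columns if c not in granted]
    if denied:
        audit_log.warning("Denied %s access to %s.%s", agent_id, table, denied)
        raise PermissionError(f"{agent_id} may not read {denied} from {table}")
    audit_log.info("Granted %s read on %s.%s", agent_id, table, columns)
    # In practice this would issue a scoped, read-only query against the source.
    return []  # placeholder result set

# Example: the agent asks for a column it has not been granted.
try:
    fetch_for_agent("triage-agent", "support_tickets", ["summary", "customer_email"])
except PermissionError as exc:
    print(exc)
```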
Given the risks, security teams should make discovering AI deployments and eliminating any shadow AI within their organisation a priority. Proactively communicating security protocols and principles for responsible AI use and development is an essential first step. From a tooling perspective, while DLP and real-time user coaching can help to an extent, they won't cover the multi-faceted risk surface created by AI projects, which can only be addressed with a comprehensive framework of security solutions.
With many security teams already managing large, fragmented ecosystems of security tools, adding yet more point solutions should be avoided. Poorly integrated point solutions bring functionality overlap, coverage gaps, and a duplicated management and operations burden.
Security platforms integrate multiple security tools into the same fabric and ensure they truly communicate and work in unison, delivering a more comprehensive and unified security approach. The right platforms also let security teams manage and configure all their security tools from a single dashboard, and provide a complete, unified view of their data, network, traffic, and users across web, cloud, AI, and private data centre environments. Once AI initiatives are discovered, security teams can focus on ensuring every aspect of a project is secured, from the supply chain to the stakeholders and data involved.
AI projects are only going to grow in complexity, and so will the challenge of securing the huge amounts of data they interact with. Australian organisations have made great progress in tackling genAI risk, but they need to keep their sights on the horizon: new AI risks will keep arriving quickly, and preventing major incidents will require agility and speed in applying the appropriate security guardrails.
Tony Burnside is SVP and Head of APAC, Netskope.