Security Gaps Plague AI Governance: US Survey
While organisations recognise the importance of artificial intelligence governance, most lack the processes to implement it effectively, creating significant security vulnerabilities and compliance challenges, according to a survey by Anaconda Inc.
Based on responses from more than 300 AI, IT, DevOps and data governance professionals in the US, the survey found that fragmented tooling and misaligned governance priorities are slowing AI adoption while introducing substantial operational risk.
Two-thirds of respondents experience deployment delays due to security concerns, with more than 40 per cent of teams spending a quarter of their AI development time troubleshooting dependency conflicts or security issues. Despite 82 per cent of organisations validating Python packages for security, nearly 40 per cent still frequently encounter vulnerabilities.
The findings highlight critical blind spots in post-deployment monitoring: 30 per cent of teams lack any formal drift detection, and only 62 per cent track models with comprehensive documentation. Just 26 per cent of organisations maintain a highly unified AI development toolchain, leading to inconsistent security controls and significant visibility gaps.
The report found that 57 per cent of respondents cited regulatory and privacy concerns as major obstacles.
The governance challenges extend to AI-assisted coding, with only 34 per cent of organisations maintaining formal policies for generative AI use in software development. Most are either adapting outdated frameworks (25 per cent) or developing new approaches (21 per cent).
“Organizations are grappling with foundational AI governance challenges against a backdrop of accelerated investment and rising expectations,” said Greg Jennings, VP of Engineering at Anaconda.
“By centralizing package management and defining clear policies for how code is sourced, reviewed, and approved, organizations can strengthen governance without slowing AI adoption. These steps help create a more predictable, well-managed development environment, where innovation and oversight work in tandem.”
Forrester predicts spending on AI governance software will quadruple to US$15.8 billion by 2030, reflecting growing urgency to secure AI supply chains without stifling innovation.
To address these challenges, respondents identified the need for better-integrated development and security workflows (29 per cent), improved visibility into model components (23 per cent) and enhanced team training (19 per cent).