AI Agents to Transform Enterprise Apps

Enterprise applications will undergo a dramatic transformation, with 40 per cent featuring task-specific AI agents by 2026, up from less than five per cent today, according to new research from analyst firm Gartner.

The research company forecasts agentic AI will drive approximately 30 per cent of enterprise application software revenue by 2035, surpassing US$450 billion globally, compared with just two per cent in 2025.

CIOs have a narrow three- to six-month window to establish their agentic AI strategies, Gartner warns, as organisations risk falling significantly behind competitors that embrace the technology early.

"AI agents are evolving rapidly, progressing from basic assistants embedded in enterprise applications today to task-specific agents by 2026 and ultimately multiagent ecosystems by 2029," said Anushree Verma, Senior Director Analyst at Gartner.

Gartner outlines a five-stage evolution beginning with AI assistants embedded in almost every enterprise application by end-2025, progressing through task-specific agents, collaborative AI systems, cross-application ecosystems, and culminating in democratised agent creation by 2029.

By the final stage, at least 50 per cent of knowledge workers will develop skills to create and govern AI agents for complex tasks, the research suggests.

The distinction between AI "assistants" and true "agents" remains unclear in Gartner's framework, particularly given the company's own acknowledgment of widespread "agentwashing", in which basic AI tools are marketed as sophisticated agents.

For compliance and risk management professionals, the shift toward autonomous AI agents presents significant governance challenges. Gartner acknowledges the need for "strong security and governance" as agents begin operating independently, but provides limited detail on regulatory frameworks or audit trails.

The transition from application-centric to agent-mediated workflows could complicate data lineage tracking and compliance reporting - core concerns for records managers and governance professionals in heavily regulated sectors.

The predictions emerge as organisations globally grapple with AI implementation challenges, including skills shortages, integration complexity, and uncertain return on investment calculations.

European cybersecurity authorities have issued new guidelines warning organisations against deploying fully autonomous artificial intelligence systems without human oversight, citing significant security risks that current technology cannot adequately address.

The German Federal Office for Information Security and France's cybersecurity agency released joint design principles in August 2025 for securing Large Language Model systems using Zero Trust architecture. The 16-page framework targets the growing deployment of "agentic" AI systems that can operate independently across business processes.

"Blind trust in LLM systems is not advisable, and the fully autonomous operation of such systems without human oversight is not recommended," the agencies stated. "It is improbable that such agents can ensure meaningful and reliable safety guarantees."

The guidelines specifically address vulnerabilities in Retrieval-Augmented Generation systems, where AI models access external databases, and warn against automatic execution of AI-generated system commands. "The user must be able to approve all system inputs of the application and actions of the agent," the agencies recommend.

Risk scenarios include data exfiltration through manipulated links, privilege escalation attacks, and supply chain compromises targeting AI system components. Critical mitigations include implementing least privilege access controls, comprehensive session isolation, and human-in-the-loop approval for sensitive operations.
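The least-privilege mitigation amounts to a deny-by-default policy: an agent may invoke only the actions explicitly granted to its role. A minimal sketch, with entirely hypothetical role and action names:

```python
# Deny-by-default least-privilege check for agent actions.
# Role and action names are illustrative assumptions only.
ROLE_ALLOWLIST: dict[str, set[str]] = {
    "report-agent": {"read_record", "summarise"},
    "ops-agent": {"read_record", "restart_service"},
}


def is_permitted(role: str, action: str) -> bool:
    """Return True only if the action is explicitly granted to the role."""
    return action in ROLE_ALLOWLIST.get(role, set())
```

Unknown roles and ungranted actions fall through to denial, so adding a capability requires an explicit allowlist entry rather than removing a restriction.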

The full report, "Design Principles for LLM-based Systems with Zero Trust", is available for download from the agencies.