WEF Report Maps AI's Role in Cybersecurity

Organisations that deploy AI extensively in security operations have reduced the average time to contain a data breach by approximately 80 days and cut average breach costs by $US1.9 million, according to a new white paper published by the World Economic Forum in collaboration with KPMG.

The findings frame a broader argument that AI has moved from a peripheral innovation to a strategic necessity across the full cybersecurity lifecycle - while also carrying new governance risks that existing security controls are not designed to manage.

The white paper, Empowering Defenders: AI for Cybersecurity, was produced by the WEF's Cyber Frontiers initiative and draws on case studies from WEF partner organisations, as well as workshops convening 105 representatives from 84 organisations across 15 industries.

The paper reports that 77 per cent of organisations are now using AI in cybersecurity, citing the WEF's own Global Cybersecurity Outlook 2026. Adoption is closely tied to organisational size and resources - larger enterprises with greater technical maturity report higher rates, while smaller entities, governments and non-governmental organisations tend to lag due to financial constraints, skills availability and data maturity.

Ninety-four per cent of respondents to the Global Cybersecurity Outlook 2026 identify AI as the most significant driver of change in cybersecurity.

Current AI applications are most concentrated in threat detection. Fifty-two per cent of organisations use AI for phishing detection, 46 per cent for intrusion and anomaly detection, and 40 per cent for user behaviour analytics, according to the Global Cybersecurity Outlook 2026 cited in the paper. Eighty-eight per cent of security teams report time savings and greater opportunity for proactive defence as a result of AI deployment.

The paper identifies five structural factors driving AI adoption as a security necessity. Attackers need only find one entry point while defenders must protect everything - and AI gives attackers new speed in identifying vulnerabilities.

Manual analysis can no longer cope with the scale and interconnectedness of modern digital environments. Security teams are overwhelmed: 76 per cent of professionals reported exhaustion in 2025, according to Sophos research cited in the paper.

Enterprise Applications: Three Case Studies

AXIS Capital - AI-Driven Security Architecture and Engineering

AXIS Capital's security architecture and engineering team sought to embed security intelligence directly into application and cloud design, while reducing the time engineers spent managing backlogs of security alerts. Traditional manual review processes could not scale to meet the demands of complex cloud environments.

The team applied AI to analyse code and cloud configurations using static application security testing and cloud security posture management capabilities. Continuous scanning within continuous integration and continuous deployment (CI/CD) pipelines identifies and addresses vulnerabilities before code reaches production.

AI prioritises risks based on exploitability and business impact. AI-generated guidance and remediation recommendations are delivered within existing developer workflows, reducing friction. Separately, AI correlates data across cloud environments to surface architectural weaknesses and misconfigurations that are difficult to detect through manual review.
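To illustrate the prioritisation step in general terms, the sketch below ranks findings by a weighted blend of exploitability and business impact. The Finding fields, weights and scores are hypothetical stand-ins, not details of AXIS Capital's implementation; a production system would derive these signals from scanner output and asset inventories.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A vulnerability finding from SAST or CSPM scanning (illustrative fields)."""
    identifier: str
    exploitability: float   # 0.0-1.0, e.g. derived from exploit-prediction scoring
    business_impact: float  # 0.0-1.0, e.g. weighted by asset criticality

def prioritise(findings: list[Finding]) -> list[Finding]:
    """Rank findings so the riskiest land at the top of the backlog.

    The 60/40 weighting is a placeholder; real deployments would tune or
    learn the weights from historical remediation outcomes.
    """
    return sorted(
        findings,
        key=lambda f: 0.6 * f.exploitability + 0.4 * f.business_impact,
        reverse=True,
    )

backlog = [
    Finding("CVE-A", exploitability=0.9, business_impact=0.3),
    Finding("MISCONFIG-B", exploitability=0.4, business_impact=0.9),
]
for f in prioritise(backlog):
    print(f.identifier)
```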

The outcome, according to AXIS Capital, was a reduction in time spent managing alerts and backlogs, accelerated response times, lower remediation costs and maintained engineering productivity - allowing security measures to be scaled without affecting the speed of business development. The approach reflects a shift toward security embedded in the development lifecycle, rather than applied as a separate layer after the fact.

ING Group - AI-Powered Data Leakage Prevention

ING, a global financial institution protecting more than 60,000 employees, faced the challenge of processing and prioritising data leakage prevention (DLP) alerts across multilingual email attachments, web uploads and metadata at scale. The volume and variety of alerts made manual triage impractical.

ING developed a machine learning solution built on top of its existing DLP tooling. An AI model categorises email attachments and is combined with a classification model to identify and prioritise potential leaks.

A separate "browser uploads" production pipeline provides security operations centre (SOC) analysts with near-real-time insights via dashboards, enabling rapid handling of incidents as they emerge. The system is designed as a reproducible workflow that ING says can be extended to other organisations.
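A minimal sketch of such a two-stage pipeline follows, with a categorisation step feeding a leak-priority score. The categories, keywords and weightings are invented for illustration; ING's production models are trained multilingual classifiers operating on real alert data, not keyword rules.

```python
from dataclasses import dataclass

@dataclass
class DlpAlert:
    """A data-leakage-prevention alert (illustrative fields)."""
    attachment_text: str
    channel: str  # "email" or "browser_upload"

def categorise(alert: DlpAlert) -> str:
    """Stage 1: categorise the attachment (keyword stand-in for an ML model)."""
    keywords = {"salary": "hr", "contract": "legal", "iban": "financial"}
    text = alert.attachment_text.lower()
    for kw, category in keywords.items():
        if kw in text:
            return category
    return "general"

def leak_priority(alert: DlpAlert) -> float:
    """Stage 2: combine category with channel context into a leak-risk score."""
    base = {"financial": 0.9, "hr": 0.7, "legal": 0.6, "general": 0.2}
    score = base[categorise(alert)]
    # Browser uploads bypass email-side controls, so weight them higher.
    return min(1.0, score + (0.1 if alert.channel == "browser_upload" else 0.0))

alerts = [
    DlpAlert("Q3 salary review spreadsheet", "email"),
    DlpAlert("IBAN list for vendor payments", "browser_upload"),
]
for a in sorted(alerts, key=leak_priority, reverse=True):
    print(f"{leak_priority(a):.2f}  {a.attachment_text}")
```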

To date, the solution has processed five million alerts and delivered a 20 per cent increase in analyst precision, according to ING. An internal SOC survey found analyst job satisfaction increased significantly with the AI-based workflow - a notable outcome given the documented burnout problem in cybersecurity operations.

For financial institutions operating under strict data governance obligations, the ING model demonstrates how AI can be applied to compliance-adjacent workflows without displacing existing security infrastructure.

Standard Chartered - AI Hyper-Automation for SOC and Case Management

Standard Chartered's SOC and response teams faced increasing pressure from growing alert volumes, complex investigations and rising expectations for speed, consistency and auditability in a global banking environment.

The bank implemented an AI hyper-automation strategy embedded into SOC and case management workflows under a strict human-in-the-loop governance model. The solution applies machine learning and large language models (LLMs) to dynamically score risk, prioritise alerts and cases, and enrich detections with contextual intelligence.

AI-driven triage automatically classifies events and prevents duplication before cases are created. Generative AI supports analysts by producing concise case summaries and drafting incident communications.

An in-console AI co-pilot provides real-time guidance, similar-case recommendations and next-best actions. The system was deployed incrementally with guardrails, full observability and kill-switch controls - retaining the ability to disable AI functions entirely if required.
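The general pattern - threshold-based triage sitting behind a kill-switch - can be sketched as follows. The environment variable name, thresholds and scoring function are hypothetical illustrations of the design, not Standard Chartered's implementation.

```python
import os

# Kill-switch: setting this (hypothetical) variable disables AI-assisted
# triage entirely and routes every alert to human analysts.
AI_TRIAGE_DISABLED = os.environ.get("SOC_AI_KILL_SWITCH") == "1"

AUTO_RESOLVE_THRESHOLD = 0.10   # below this risk score, auto-close the case
ESCALATE_THRESHOLD = 0.70       # above this, escalate to an analyst at once

def score_risk(alert: dict) -> float:
    """Stand-in for the ML/LLM risk scorer described in the paper."""
    return 0.5 * alert.get("severity", 0.0) + 0.5 * alert.get("asset_criticality", 0.0)

def triage(alert: dict) -> str:
    if AI_TRIAGE_DISABLED:
        return "manual_queue"             # fail safe: humans handle everything
    risk = score_risk(alert)
    if risk < AUTO_RESOLVE_THRESHOLD:
        return "auto_resolved"            # low-risk, repetitive: AI closes it
    if risk > ESCALATE_THRESHOLD:
        return "escalated_to_analyst"     # high-stakes: mandatory human review
    return "analyst_review_with_summary"  # AI drafts a summary, human decides

print(triage({"severity": 0.05, "asset_criticality": 0.1}))  # auto_resolved
```

The human-in-the-loop property comes from the middle band: only alerts below a deliberately low threshold are closed autonomously, and everything consequential still lands in front of an analyst.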

The outcome was a 25 to 35 per cent reduction in manual triage effort and a 20 to 30 per cent improvement in time-to-triage, according to the bank. Low-risk, repetitive cases are now auto-resolved within defined thresholds, freeing analysts for higher-complexity investigations.

The Standard Chartered model is notable for its explicit governance architecture - each layer of automation operates within defined boundaries, with human accountability preserved and override capability maintained throughout.

Agentic AI: Opportunity and New Governance Demands

The paper dedicates a section to agentic AI - systems capable of autonomous planning, decision-making and execution of cybersecurity tasks.

The paper describes a four-level spectrum of AI autonomy in cybersecurity: "assist" (AI processes and organises data, humans determine the response); "recommend" (AI flags issues and waits for human approval to act); "execute overridable" (AI autonomously takes reversible actions that humans can override in real time); and "execute independent" (AI acts without real-time human involvement, with oversight delegated to supervisor AI agents or post-hoc audits).

The appropriate level, the paper argues, should be determined by the risk and reversibility of the action - high autonomy for low-stakes, reversible decisions; mandatory human oversight for high-stakes actions with lasting consequences.
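That mapping reduces to a simple decision rule. The sketch below encodes the paper's four levels against an action's stakes and reversibility; the numeric thresholds are illustrative placeholders for an organisation's own calibrated risk appetite.

```python
from enum import Enum

class Autonomy(Enum):
    ASSIST = "assist"                            # AI organises data, humans decide
    RECOMMEND = "recommend"                      # AI flags, waits for approval
    EXECUTE_OVERRIDABLE = "execute overridable"  # reversible autonomous action
    EXECUTE_INDEPENDENT = "execute independent"  # no real-time human involvement

def autonomy_level(stakes: float, reversible: bool) -> Autonomy:
    """Map an action's risk profile to the paper's four-level spectrum."""
    if stakes >= 0.7:
        # High-stakes actions with lasting consequences: mandatory human oversight.
        return Autonomy.RECOMMEND if reversible else Autonomy.ASSIST
    if reversible:
        # Low-stakes, reversible decisions tolerate the highest autonomy.
        return Autonomy.EXECUTE_OVERRIDABLE if stakes >= 0.3 else Autonomy.EXECUTE_INDEPENDENT
    return Autonomy.RECOMMEND

print(autonomy_level(stakes=0.1, reversible=True).value)   # execute independent
print(autonomy_level(stakes=0.9, reversible=False).value)  # assist
```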

The paper identifies three core risks specific to agentic AI. First, an expanded attack surface: AI agents introduce new entry points that can be exploited or hijacked. Second, unintended agent behaviours driven by hallucinations, external manipulation or misconfigured objectives can cascade across multi-agent environments at machine speed.

Third, governance gaps arise where agents are deployed rapidly without proper approval or validation, with no clear accountability for outcomes. The paper notes that these risks cannot be addressed through traditional security controls alone.

The paper includes an explicit caution against over-reliance. "Heavy reliance on AI can undermine cyber resilience," it states. "Excessive trust in automated decisions creates a false sense of security and over time erodes the expertise needed to intervene when systems fail."

Security teams are advised to combine AI with human judgement, simulate AI failures and design fail-safes that keep security operations functional during AI outages. The paper also flags a skills atrophy risk: as AI automates routine tasks, security professionals have fewer opportunities for hands-on practice, potentially weakening organisational resilience when automation falls short.
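One common fail-safe pattern consistent with that advice is a deterministic fallback path that takes over when the AI component errors out. The sketch below uses hypothetical callables to show the shape of the design; the deliberately broken AI path doubles as the kind of failure drill the paper recommends.

```python
import logging

logger = logging.getLogger("soc.failsafe")

def classify_with_failsafe(alert: dict, ai_classifier, rule_based_fallback) -> str:
    """Call the AI classifier but degrade gracefully to static rules.

    Both classifier arguments are hypothetical; the point is that a
    deterministic fallback keeps triage functioning during an AI outage.
    """
    try:
        return ai_classifier(alert)
    except Exception:  # model endpoint down, timeout, malformed output, etc.
        logger.warning("AI classifier unavailable; using rule-based fallback")
        return rule_based_fallback(alert)

def broken_ai(alert):
    """Simulated AI failure, standing in for an outage drill."""
    raise TimeoutError("model endpoint unreachable")

def static_rules(alert):
    return "escalate" if alert.get("severity", 0) > 0.5 else "queue"

print(classify_with_failsafe({"severity": 0.8}, broken_ai, static_rules))  # escalate
```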

The full report is available from the World Economic Forum.