APRA Threatens Action Over AI Governance Failures

APRA has told banks, insurers and superannuation trustees that governance, risk management and operational resilience practices are failing to keep pace with the speed and complexity of AI adoption - and has threatened enforcement action against entities that do not address the gaps.

The Australian Prudential Regulation Authority (APRA) issued the warning in a letter to all regulated entities on 30 April 2026, setting out findings from a targeted supervisory review of large financial institutions conducted in late 2025.

APRA Member Therese McCarthy Hockey said the regulator had observed a fundamental disconnect between AI deployment and governance capability.

"What we've observed from our supervisory engagement is that while AI adoption is continuing apace, the systems and processes required to safely govern its use aren't keeping up. Likewise, the speed at which entities can identify and patch vulnerabilities needs to operate much faster, commensurate with the AI-accelerated threat.

"APRA is also engaging across the sector on the potential for increased cyber threats from high capability AI frontier models such as Anthropic Mythos."

APRA found that while all reviewed entities were actively adopting AI, few had operationalised governance in practice. The letter states that most entities "recognise that existing prudential standards apply to AI risk" but have tended to treat it as "just another technology" - missing key differences such as adaptive model behaviour, bias and data privacy risks specific to predictive systems.

"Many entities are already trialling or introducing AI capabilities in areas such as software engineering, claims triage, loan application processing, fraud and scam disruption, customer interaction and insight generation. However, governance has not matured at the same pace."

Boards were identified as a particular weakness. APRA observed strong interest in AI's commercial upside but said "many Boards are still developing the technical literacy required to provide effective challenge on AI related risks and oversight." APRA also noted an over-reliance on vendor presentations, without sufficient examination of model behaviour risks or the implications for critical operations.

Cyber Threats Expanding Faster Than Defences

APRA found that AI adoption is materially changing the cyber threat landscape, creating new attack pathways and enabling faster, more coordinated attacks. Specific attack vectors identified by APRA include prompt injection, data leakage, insecure integrations, exploit injection and the manipulation or misuse of autonomous AI agents.

The letter states that AI "can shorten the attack cycle and increase speed, coordination and impact." APRA is separately engaging across the sector on the threat posed by high-capability frontier AI models, noting current Australian Signals Directorate (ASD) advice on frontier models.

APRA identified critical gaps in identity and access management - noting that "capabilities have not yet adjusted to non-human actors such as AI agents." The volume and speed of AI-assisted software development is also straining existing change and release management controls.

Shadow AI use inside financial institutions was flagged as a direct governance concern. APRA observed that entities were relying primarily on "policy direction or detective, after-the-fact measures, rather than enforceable technical restrictions or robust preventative controls" to manage staff use of unapproved AI tools.

Supplier Concentration and Assurance Gaps

APRA found some entities heavily dependent on a single vendor for multiple AI use cases. Few had demonstrated "robust contingency planning or tested exit and substitution strategies for critical AI providers." Contractual arrangements frequently lacked provisions for audit rights, model update notifications and incident reporting.

The letter also highlighted opacity in AI supply chains. Where AI capabilities are embedded within broader software platforms or developer tooling, "upstream dependencies such as foundation models, training data sources and fourth party service providers are opaque", limiting entities' ability to independently assess model performance, bias, resilience and security.

On assurance, APRA found that existing internal audit and risk functions lack the specialist skills and tools required to assess AI systems - particularly where agentic behaviour, automated decision-making or AI-assisted code generation are involved. APRA observed that entities were relying on "point in time and sample-based assurance methods" despite these being "ill-suited to probabilistic models that learn, adapt and degrade over time."

The full APRA letter and executive management attachment are available at apra.gov.au.