AI adoption is spreading across banking, but critics say systems to match it are lacking. [Photo: Shutterstock]

Australia's prudential regulator, APRA, has warned that governance and security systems for controlling artificial intelligence are failing to keep pace as banks and other financial firms rapidly expand adoption.

On April 30 local time, fintech outlet Finextra reported that APRA said in a letter to the industry that governance, risk management, assurance and operational resilience frameworks are inadequate compared with the growing scale, speed and complexity of AI use.

The letter followed a targeted supervisory review APRA conducted late last year. After examining how financial firms adopt and control AI, APRA concluded that the spread of advanced AI is creating new financial and operational vulnerabilities. It said information security frameworks are not keeping up with the pace of change, and that the speed of identifying and fixing vulnerabilities must also increase to match the level of threats accelerated by AI.

The regulator also said frontier AI models could heighten the risk of cyber attacks. APRA said models such as Anthropic's Claude can increase malicious actors' ability to search for vulnerabilities, and it expected cyber attacks to become more likely and to grow in speed and scale.

APRA also raised concerns about a lack of preparation by bank boards. It said many boards showed strong interest in AI's potential benefits but often lacked the technical understanding needed to meaningfully question management about AI risks and oversight. Therese McCarthy Hockey, an APRA board member, said, "We cannot ignore the risks of such powerful technology," adding that the risks apply not only to internal use but also to external actors with malicious intent.

Vulnerabilities were also found in supply chains and operations. Some financial firms were overly dependent on a single provider across multiple AI use cases, and there were gaps in contingency plans. APRA also pointed to reduced transparency as AI functions become embedded within broader software platforms and development tools, making it harder to see where and how models are trained and updated and what constraints apply to them. It said this limits a firm's ability to fully assess and manage risks.

APRA said existing management and assurance approaches are also not sufficient for AI. It said current management frameworks are fragmented and may not provide the level of validation required for AI. McCarthy Hockey cited a key issue identified during supervision: "AI adoption continues to accelerate, but the systems and processes to control it safely are not keeping up." She added, "The speed at which firms identify vulnerabilities and patch them must become much faster to match AI-accelerated threats."

APRA did not, however, set out a position on introducing additional regulation at this stage. Instead, it said it expects firms to significantly narrow the gap between the strong capabilities of the technologies they use and their ability to monitor and control them. It said key tasks for the banking sector are likely to include improving boards' technical understanding, reducing dependence on individual providers, and strengthening checks on software with embedded AI.

Copyright © DigitalToday. All rights reserved. Unauthorized reproduction and redistribution are prohibited.