BankThink

What happens to your bank when AI systems go offline unexpectedly?

As artificial intelligence is integrated into more and more core banking operations, bank boards of directors need to make sure business continuity plans account for the possibility of AI system failures, write Sandra Galletti and Steve Goldman, of MIT.

Artificial intelligence is becoming central to many core banking activities, powering systems used for fraud detection, early risk sensing, credit decisions, anti-money-laundering surveillance, and a growing set of customer-facing services. This level of adoption reflects an increasing reliance on AI and raises a critical question for bank boards: Can their institutions keep operating if those systems fail?

A 2025 American Banker survey found that AI is rapidly expanding into risk, compliance, and financial analysis, with 70% of big-bank respondents reporting chatbot use and 63% adopting biometric tools. At HSBC, AI is used to analyze hundreds of millions of transactions per month in its fight against financial crime. JPMorganChase has deployed generative AI tools to over 200,000 employees, while Bank of America's virtual assistant "Erica" handles more than 58 million interactions each month.

Some institutions may be more dependent on AI than their leaders realize. On paper, continuity plans look robust, detailing manual reviews or alternate processes, fallback queues, rerouting rules, and escalation protocols. But are banks sure that, in practice, those paths could be operated at scale with the staffing and skills available today? AI-enabled systems often sit at points in the workflow where large volumes of activity are filtered, scored, flagged, or prioritized before staff ever see them, and the earlier manual or rules-based pathways have been pared back as processes were streamlined around those capabilities.
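To make that concrete, here is a minimal sketch, in Python, of what such a fallback path might look like: an AI scoring call with a rules-based reroute and a manual-review queue behind it. Every name in it (the scoring functions, the watchlist, the thresholds) is a hypothetical illustration, not a reference to any real bank's system.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    txn_id: str
    amount: float
    country: str

# Hypothetical placeholders for illustration only.
HIGH_RISK_COUNTRIES = {"XX", "YY"}        # stand-in watchlist
MANUAL_REVIEW_QUEUE: list[Transaction] = []

def ai_fraud_score(txn: Transaction) -> float:
    """Stand-in for a model call; here it simulates an outage."""
    raise ConnectionError("scoring service unavailable")

def rules_based_score(txn: Transaction) -> float:
    """Coarse fallback rules used when the model path fails."""
    score = 0.0
    if txn.amount > 10_000:
        score += 0.5
    if txn.country in HIGH_RISK_COUNTRIES:
        score += 0.5
    return score

def route(txn: Transaction) -> str:
    try:
        score = ai_fraud_score(txn)
    except ConnectionError:
        score = rules_based_score(txn)    # reroute to the fallback rules
    if score >= 0.5:
        MANUAL_REVIEW_QUEUE.append(txn)   # escalate for human review
        return "manual_review"
    return "auto_clear"

print(route(Transaction("T1", 25_000.0, "XX")))  # -> manual_review
```

The point of the sketch is not the scoring logic but the staffing question it raises: if the fallback branch fires for every transaction, does the manual-review queue have enough trained people behind it?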

If an AI system becomes unavailable or its performance degrades, the operational impact can be significant. Fraud detection pipelines can stall, exposing the bank to higher losses, or anti-money-laundering monitoring can miss suspicious activity that would normally be flagged. In credit activities, loan approval processes can freeze, disrupting revenue flows and delaying decisions for customers. In some cases, systems remain technically available, but concerns about output reliability can prompt risk and compliance teams to suspend or restrict their use.
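That last scenario, suspending a technically available system over output-reliability concerns, can itself be made operational. Below is a hedged sketch of a simple "circuit breaker" that takes a model out of service when the share of questionable outputs in a recent window crosses a threshold; the class name, window size, and threshold are all illustrative assumptions, not any vendor's API.

```python
from collections import deque

class ModelCircuitBreaker:
    """Hypothetical sketch: suspend an AI system when its recent
    reliability signal crosses a threshold, forcing traffic onto
    fallback paths until risk/compliance re-enables it manually."""

    def __init__(self, window: int = 100, max_bad_ratio: float = 0.2):
        self.recent = deque(maxlen=window)   # 1 = bad output, 0 = good
        self.max_bad_ratio = max_bad_ratio
        self.suspended = False

    def record(self, output_ok: bool) -> None:
        self.recent.append(0 if output_ok else 1)
        if len(self.recent) == self.recent.maxlen:
            bad_ratio = sum(self.recent) / len(self.recent)
            if bad_ratio > self.max_bad_ratio:
                self.suspended = True        # deliberate manual reset only

    def use_model(self) -> bool:
        return not self.suspended

breaker = ModelCircuitBreaker(window=10, max_bad_ratio=0.3)
for ok in [True] * 6 + [False] * 4:          # simulated quality checks
    breaker.record(ok)
print(breaker.use_model())                   # -> False: route to fallback
```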

From a continuity perspective, having a clear map of critical models and data flows, including their operational dependencies and any components operated by key third-party providers, can help banks understand how long specific AI systems can remain offline before customer or regulatory impact becomes material. A point worth considering for boards and executives is whether their confidence about operating without a particular AI system is supported by testing and exercises conducted under realistic conditions.
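One lightweight way to hold such a map is as a structured inventory recording each system's upstream dependencies, third-party provider, and maximum tolerable offline window, so an outage can be checked against tolerance automatically. The sketch below assumes invented system names, vendors, and tolerances purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    depends_on: list[str]      # upstream data feeds
    third_party: str | None    # external provider, if any
    max_offline_hours: float   # tolerance before material impact

# Hypothetical inventory for illustration only.
INVENTORY = [
    AISystem("fraud-scoring", ["card-feed"], "VendorA", 4.0),
    AISystem("aml-monitoring", ["txn-feed", "watchlists"], None, 24.0),
    AISystem("credit-decisioning", ["bureau-feed"], "VendorB", 8.0),
]

def breach_report(outage_hours: dict[str, float]) -> list[str]:
    """List systems whose observed outage exceeds their tolerance."""
    return [
        s.name for s in INVENTORY
        if outage_hours.get(s.name, 0.0) > s.max_offline_hours
    ]

print(breach_report({"fraud-scoring": 6.0, "aml-monitoring": 2.0}))
# -> ['fraud-scoring']
```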

Supervisors are placing growing emphasis on operational resilience and continuity. In the United States, existing supervisory frameworks already cover AI systems through expectations for model and third-party risk management: SR 11-7 sets expectations for board oversight of model risk management, while the 2023 Interagency Guidance on Third-Party Relationships emphasizes that banking organizations remain responsible for managing risks in their use of third parties, including those involving critical technology and AI service providers. Both of these supervisory references are highly relevant to banks' continuity planning for AI systems.

International supervisors are moving in a similar direction. The European Union's Digital Operational Resilience Act requires all regulated financial entities, including banks, insurers, and payment institutions, to maintain a digital operational resilience testing program and a harmonized information and communications technology, or ICT, incident reporting framework. It also grants the European Supervisory Authorities direct oversight of designated critical third-party ICT service providers. These actions suggest that supervisors are prepared to treat digital and third-party resilience as a core regulatory concern, including services that rely on AI.

As AI becomes more closely connected to essential banking workflows, continuity thinking becomes an important part of board oversight. In practice, that means bringing AI-dependent processes into existing continuity discussions and exercises, so that their operational dependencies and fallback options are understood as clearly as those of any other critical system.

For bank boards, the path forward rests on acknowledging that continuity planning must evolve as AI is integrated into more critical processes. Preparedness is ultimately what will keep the bank running even if AI systems are disrupted.
