
Last year, Klarna publicly reported that its OpenAI-based assistant handles two-thirds of customer service chats end-to-end, across 23 markets, doing the work of 700 agents. A bank could not deploy that model to live customer communications without multiple rounds of model-risk sign-off and regulatory clearance on liability.
At the moment, every AI product landing on a bank's doorstep feels like a blank slate. Compliance teams have to map the risks, decide which teams need to be involved, and work out what sign-offs are required. With no common template, the same questions are asked repeatedly, creating delays for buyers and vendors alike. The time lost matters. It drains resources, inflates costs and slows the spread of potentially useful technology.
The lack of standardization in how banks review AI tools is an expensive problem for banks and vendors alike, creating friction, delays and uncertainty. And while banks debate definitions, nimble fintechs are moving ahead, producing a two-speed industry that regulators have simply not accounted for.
After years of watching AI emerge, regulators have yet to deliver clear guidelines for banks and financial institutions. As a result, banks are stuck building patchy frameworks and internal silos that inflate costs and delay products that could improve service for customers.
Nor are frameworks deployed consistently, an issue policymakers should be urged to address. In the U.S., some banks are aligning with the National Institute of Standards and Technology, or NIST, AI Risk Management Framework, or RMF. Others are said to be borrowing concepts from the EU AI Act, even when it may not apply to them. We have also seen interest in ISO 42001, the emerging AI management system standard, which some procurement teams have begun adding to their checklists.
A tool cleared under one framework can still face scrutiny elsewhere, creating a fragmented experience for multinational customers. Diverging politics deepen the gap. European regulators are moving toward tighter rules while the U.S. has recently signaled a lighter touch. For a bank trying to roll out a tool globally, this can mean three or four separate risk reviews and three or four different answers. Vendors also face a constantly shifting target as they move from one jurisdiction to another. It's clear we need a central source of truth when it comes to evaluating and assessing the suitability of AI.
In our conversations with banks, the frustration is obvious. Lawyers and compliance teams are being asked to sign off on models that change faster than their review cycles can keep up with. The process moves at a regulatory crawl while the technology runs laps around it. Meaningful adoption of AI will require banks to end their reliance on checklists designed for traditional third-party software, which cannot capture the unique characteristics of machine learning tools. This is especially true of banks that treat every AI tool as high risk regardless of its function, which wastes effort and swells the backlog.
Fragmented reviews also make it harder to meet customer expectations. Clients increasingly assume that their bank can deploy modern technology safely and quickly, without realizing how lengthy and exacting the adoption process is. When it takes 12 to 18 months to approve a tool that may double in capability every seven months, the gap between expectation and reality grows.
But the reality is, banks look disorganized because they are being asked to invent governance from scratch. There is no shared definition of AI, no baseline risk tiers and no agreed documentation pack. In that absence, each institution improvises a process, then repeats it for each new tool. That is not prudence; it is the cost of not having a global regulatory standard to assess tools against.
This dynamic is already having real consequences. Banks slow their own adoption of tools that could improve performance. Vendors waste money and energy navigating contradictory and ever-evolving requirements. Customers see fewer innovations. Perhaps most worrying, business teams may begin bypassing compliance altogether, creating a "shadow AI" problem that no one tracks until something goes wrong.
The banking industry cannot afford another lost decade of slow adaptation. Banks and regulators must start designing for innovation instead of defending against it. A more harmonized and transparent approach would save money, reduce delays and improve risk management.
First, regulators must agree on one definition of AI, a set of risk tiers based on autonomy, data sensitivity and explainability, and a basic set of documents that travel with every model (things like intended use, data lineage, monitoring and change controls).
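To make the tiering idea concrete, here is a rough sketch in code. The field names, scoring scale and cutoffs are illustrative assumptions, not a proposed standard; the point is that a documentation pack plus a small set of scored attributes is enough to assign a tier mechanically.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    """A minimal 'documentation pack' that travels with a model.

    Scores are hypothetical, rated 1 (low) to 3 (high).
    """
    name: str
    intended_use: str
    data_lineage: str
    autonomy: int          # how independently the model acts
    data_sensitivity: int  # sensitivity of the data it touches
    explainability: int    # 3 = opaque, 1 = fully interpretable

def risk_tier(profile: ModelProfile) -> str:
    """Map the three scores onto coarse tiers (illustrative cutoffs)."""
    score = profile.autonomy + profile.data_sensitivity + profile.explainability
    if score >= 8:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

chatbot = ModelProfile(
    name="customer-assistant",
    intended_use="end-to-end customer service chat",
    data_lineage="CRM transcripts, anonymized",
    autonomy=3, data_sensitivity=3, explainability=3,
)
print(risk_tier(chatbot))  # high
```

A shared definition of the fields matters more than the exact arithmetic: once every institution agrees on what to score, the tiers become comparable across banks.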
Second, reviews need to move at the pace of the technology, not months behind it. That can be done without losing control by setting clear thresholds for when an update triggers reapproval, keeping a registry of approved models, and time-boxing the review period so decisions are made before the model is outdated.
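The registry-and-thresholds idea can likewise be sketched in a few lines. The triggers here, a major-version bump or an expired review window, are assumptions chosen for illustration; a real policy would tune both to the institution's risk appetite.

```python
from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=90)  # time-boxed review period (assumed)

# Registry of approved models: version and approval date per entry.
registry = {
    "customer-assistant": {
        "approved_version": "2.1",
        "approved_on": date(2025, 1, 15),
    }
}

def needs_reapproval(model: str, current_version: str, today: date) -> bool:
    """A major-version change or an expired window triggers fresh review."""
    entry = registry[model]
    major = lambda v: v.split(".")[0]
    if major(current_version) != major(entry["approved_version"]):
        return True
    return today - entry["approved_on"] > REVIEW_WINDOW

print(needs_reapproval("customer-assistant", "2.3", date(2025, 2, 1)))  # False: minor update, in window
print(needs_reapproval("customer-assistant", "3.0", date(2025, 2, 1)))  # True: major version change
```

The value of explicit triggers is that minor updates flow through without a full review cycle, while material changes cannot slip past unexamined.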
Finally, there must be cross-border compatibility. A model that has already passed an equivalent review in one major market should not have to start again from zero in another. Other parts of finance already use this logic. Payments and securities rely on passporting. AI should too, if customers are to see the same safe outcomes wherever they bank.
The future of AI in banking will be defined not only by who builds the most impressive tools but also by who can deploy them responsibly, promptly and at scale. If legal and compliance teams can move from ad hoc reviews to a disciplined, transparent approach, they will remove one of the biggest obstacles to innovation and help their institutions compete more effectively in a world where technology knows no borders.