BankThink

Banks should move forward on AI with eyes open to potential risks

  • Key insight: Banks should be actively experimenting with AI. The challenge is how to do so with discipline, judgment and a clear understanding of its limits.
  • What's at stake: AI's weaknesses can be lethal in banking.
  • Forward look: Banks should deploy AI thoughtfully, in partnership with experienced domain expertise and strong technological leadership.

For the moment, the hype around artificial intelligence is at a fever pitch. Some view it as heralding the apocalypse; others believe it will solve nearly every problem imaginable. (If it can improve my golf swing, I'll endorse any or all of the purveyors 100%.)

Having been a senior executive at a large technology firm in the early days of AI and machine learning, I, too, share some of the excitement. We are clearly dealing with a genuine technological breakthrough in today's AI. But my excitement is also tempered by experience. I have seen promising technologies overpromise and underdeliver, particularly when adopted without sufficient domain expertise or discipline. For all the benefits AI and other technological advances have brought, I have seen relatively few that prove truly transformative for financial firms without expert guidance.

Some financial firms, given their sophistication, may be positioned to adopt AI at the cutting edge and reap early rewards. However, even for large, sophisticated banks, the right strategy is to be an active and informed adopter working with trusted guides or established providers.

Banks should experiment and invest, but carefully. I would be wary of excessive spending, particularly when it is divorced from real operational understanding, and equally wary of discarding proven systems too quickly in favor of untested ones. AI will undoubtedly be a powerful tool, but only one of many technologies institutions can use as a competitive differentiator.

To my mind, there has been too much focus on what AI alone can do and too little on what existing technology companies can do as they enable themselves with AI. This is a mistake I see the marketplace making more broadly.

We are all familiar with AI's current weaknesses: hallucinations, inaccuracies and variability in outputs. In many industries, these flaws may be inconvenient. In banking, they can be lethal. That is why progress is most safely made either by those with deep domain expertise or by leading technology firms thoughtfully integrating AI into their products.

Furthermore, the more consequential risks may be broader. AI introduces new dimensions of cyber risk, increases dependencies on energy and energy infrastructure that may not always be reliable, and raises the possibility that institutions will lock themselves into complex, expensive systems that are difficult to upgrade or replace.

For many reasons, AI is best paired with both external and internal human expertise. Beyond mitigating AI's unanticipated weaknesses, human experts and guides can deepen both their domain knowledge and their AI capabilities over time, improving outcomes. Turning over essential operations entirely to AI not only exposes the institution to unanticipated disruptions but also forfeits the opportunity to develop deeper human and technological expertise.

For smaller financial organizations, there is an additional challenge. I'm concerned many AI-driven solutions will require scale and/or expense that community and regional players may struggle to reach. Without thoughtful strategies, this could further widen the gap between large and small institutions. Here, banking organizations should increasingly turn to network-based models that allow firms to achieve a kind of "virtual scale" through collaboration.

Indeed, I am quite confident nearly every vendor serving the banking industry will either partner with AI providers or embed AI into their offerings. It is too early to know which of these companies will ultimately be displaced by AI-native competitors (and some certainly will be), but the most vulnerable will likely be those that are slow to adopt or resist using AI out of concern it may erode margins.

In fact, many, if not most, established and highly sophisticated technology providers are already incorporating AI in constructive ways. For example, firms in identity verification and fraud prevention are embedding AI directly into workflows to improve real-time decisioning, reduce false positives and better adapt to evolving threats. Similarly, in specialized areas such as lending operations, some providers are using AI to streamline complex processes, enhance risk visibility, and improve the quality and consistency of underwriting and monitoring.

In fairly short order, I believe banks themselves will be significant users of AI-enabled technologies, both through their vendors and within their own operations. For most, this is true today. My advice is that every bank should designate an individual or task force to systematically evaluate where AI can improve functions across the institution, including both internal processes and third-party capabilities.

Equally important, banks must understand these technologies well enough to explain them to regulators and external auditors. That means demonstrating how an AI-driven system reaches its conclusions. Transparency and explainability will not be optional. Examiners may be willing, for a time, to give banks the benefit of the doubt, but that will not last indefinitely, particularly where AI-driven decisions implicate safety and soundness or core compliance obligations such as anti-money laundering and fair lending. There, expectations will rise quickly, and institutions will be held accountable for outcomes regardless of how those outcomes are generated.

I am certain AI performs best when paired with strong domain expertise. The technologists building these systems are often extraordinarily capable. But they are not bankers or regulators, and their results can lack the nuance and judgment that come from years of experience, particularly in a field like banking, where what is not explicitly stated can matter as much as what is. Regulation, in particular, is an area where examiner judgment varies, sometimes meaningfully, from one examination to another and is rarely captured fully in data. AI, at least in its current form, is not well suited to navigating those subtleties on its own.

So where does that leave us?

Banks should be actively experimenting with AI. The challenge for banks is not whether to adopt AI but how to do so with discipline, judgment and a clear understanding of its limits. They should deploy it thoughtfully, in partnership with experienced domain expertise and strong technological leadership. But they should not hand over the keys to the kingdom to AI providers. At least not yet, and perhaps not ever.

