How banks can adopt generative AI


Adopting generative AI and large language models will be a requirement for banks that want to keep up with the rest of the world, experts said, but their processes must be responsible and efficient.

Banks are eager to adopt generative AI, such as OpenAI's ChatGPT or Google's Bard, to increase internal productivity and enhance customer experience. But the industry, which tends to approach innovation cautiously, will need a path to successful implementation and evaluation.

Financial institutions are ramping up investments as they search for the best ways to apply the technology, according to a KPMG report released exclusively to American Banker. The data is based on June survey responses from 56 U.S. financial services executives at organizations with more than $1 billion in revenue, as part of a study of 200 leaders across industries.

"There are certainly more questions than answers when it comes to AI as executives assess the use cases, but I think it's pretty clear that bank executives see the transformative power of AI," said Peter Torrente, a banking and capital markets national sector leader in KPMG's audit practice.

Almost all banks are evaluating generative AI opportunities, said Christine Livingston, managing director and leader of AI at Protiviti, but only a handful have actually started applying generative AI and large language models in a meaningful way. 

For banks that want to jump into the generative AI space, here's a list of the top considerations and how to navigate them:


What's the use?

Banks that are looking to enter the generative AI space should first research its uses. Experts recommend applying the technology to act in tandem with, not instead of, employees. Protiviti's Livingston said most banks that are deploying generative AI and large language models now are starting with internal uses.

According to the KPMG data, 76% of financial services executives said they see fraud detection as a major application. More than two-thirds of respondents said compliance and risk will be a top use. For instance, banks could use generative AI to automate regulatory filings and analyze historical data to simulate risk scenarios. Generative AI will also likely be used to power more sophisticated consumer-facing chatbots, per 66% of respondents.

Most financial institutions are still in the exploration or ideation phases, researching which practices would be most valuable to them. Major banks like JPMorgan Chase, Wells Fargo and Goldman Sachs, which have been experimenting with the innovative technology for years, are rolling out a number of AI-powered programs.

JPMorgan Chase is beginning to use large language models to detect fraud, by examining patterns in emails for signs of compromise, among other uses. Goldman Sachs is using generative AI to assist software engineers in code development, which Chief Information Officer Marco Argenti said will make those employees "superhuman." 
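As a simplified illustration of that kind of email screening, and not a description of any bank's production system, the sketch below prompts a general-purpose model through the OpenAI Python client to rate an email's compromise risk; the model name and the prompt wording are assumptions.

```python
# Illustrative sketch only: prompting an LLM to flag possible business email
# compromise. The model name and prompt are assumptions, not any bank's system.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCREEN_PROMPT = (
    "You are a fraud analyst. Review the email below for signs of business "
    "email compromise (urgent payment requests, changed account details, "
    "spoofed senders). Reply with RISK: LOW, MEDIUM or HIGH and one reason."
)

def screen_email(email_text: str) -> str:
    """Ask the model to rate the compromise risk of a single email."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system", "content": SCREEN_PROMPT},
            {"role": "user", "content": email_text},
        ],
        temperature=0,  # keep the rating as consistent as possible
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(screen_email(
        "Hi, please wire the invoice payment today to our NEW account below..."
    ))
```

In practice, a screen like this would feed into existing fraud workflows rather than make decisions on its own.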

Ally Bank has piloted an AI-powered program that transcribes and summarizes customer service calls, a job previously done manually by contact center representatives. SouthState Bank in Winter Haven, Florida, trained an enterprise version of ChatGPT that lets employees query bank policies, draft emails and summarize meetings.

What's the best way to govern AI?

Creating a governance framework around policies, ethics and usage is crucial to deploying generative AI and large language models, experts said. While most banks have existing governance frameworks, they need to account for the new risk elements that emerge with the latest technology. Torrente said trust, fairness, explainability, accountability, data integrity, reliability, security and safety are key considerations in establishing those policies.

Most banks are considering ethics as they research generative AI and large language models, Torrente said. Many banks are training models on data like internal policies and bank practices, but excluding any customer information. JPMorgan Chase announced that it hired data ethicists and created a chief data and analytics officer role to manage AI firmwide.

Livingston said it's important for banks to consider the details of the technology when shaping governance, such as the machine learning models it uses, the type of AI and how it's being used. The context of its application will affect the level of risk.

"If you're using, for example, generative AI to take meeting notes and summaries internally, that's a vastly different risk profile than using AI in a lending function to determine who you will or will not lend to you, or how much you might lend to them," Livingston said.

She added that banks can use existing examples like Google's ethical AI framework or the National Institute of Standards and Technology's AI Risk Management Framework.

Some banks have reportedly banned employees from using ChatGPT, which isn't an effective practice, and could have a negative impact, said Ryan Favro, a managing principal at Capco, in a June interview with American Banker. Banks should instead establish policies around how to use the technology responsibly.

How do you measure success?

Researching and deploying generative AI and large language models is a major lift and investment for banks. Banks will need to assess and update existing data and architecture, hire and upskill their workforce commensurate with their technology ambitions, and continually monitor and evaluate performance.

"It's going to be so transformational that I think executives are building it into both the near-term strategy through pilots and through longer-term strategies," Torrente said. "There's always a balance of investing – making progress in the short term, but investing for the longer term returns."

KPMG's June data showed that 42% of financial services executives expected to increase generative AI investments by 50% to 99%, and 41% of respondents expected an increase of more than 100%.

This means banks must also establish how they'll gauge the returns.

"It's always a question that we encourage clients to think about before they're even in the proof-of-concept or experimentation phase," Livingston said. "If you've got this backlog of 200 ideas, as you're trying to figure out enterprise applications of those, what are some of the metrics and measures of success, and the ways you're going to measure value?"

Some applications are designed to create process efficiencies, so banks could compare the amount of time a process takes before and after using AI. Others could be quantified through dollar amounts, like revenue creation or saved expenses. 
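For efficiency-focused applications, that before-and-after comparison can be reduced to simple arithmetic. The sketch below is a generic illustration with placeholder figures, not a metric used by any of the banks in this story.

```python
# Generic before-and-after efficiency math; all figures are placeholders,
# not data from any bank cited in this article.
def hours_saved_per_year(minutes_before: float, minutes_after: float,
                         volume_per_year: int) -> float:
    """Total staff hours saved across all instances of a process in a year."""
    return (minutes_before - minutes_after) * volume_per_year / 60

def dollar_value(hours: float, loaded_hourly_cost: float) -> float:
    """Translate hours saved into an annual dollar figure."""
    return hours * loaded_hourly_cost

# Example: call summaries that drop from 6 minutes to 1 minute of staff time,
# across 500,000 calls a year, at a $40 fully loaded hourly cost.
saved = hours_saved_per_year(6, 1, 500_000)
print(f"{saved:,.0f} hours, roughly ${dollar_value(saved, 40):,.0f} per year")
```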

JPMorgan Chase is aiming to generate $1.5 billion through AI initiatives by year-end. Argenti, from Goldman Sachs, said in May that the bank had seen significant efficiency boosts in coding during the few months it had been testing the application.

"You can easily see at least a 10% to 30% boost of productivity," Argenti said in an interview at the time. "So a superhuman developer could be 30% to 40% more productive. If you map it to the typical IT cost of an organization, especially in our field, that very quickly can add up to $100 million in savings."

Despite the eagerness of financial services executives to invest in generative AI, only 22% of respondents to the KPMG survey said the technology would provide a significant competitive advantage, which could reflect concerns about speed-to-market and potential regulation. The survey showed 40% of respondents think their digital infrastructure is at a "low or medium level of development."

To buy or to build?

Banks must evaluate whether they'll partner with a provider to offer generative AI, build their own models in-house or combine the strategies. The major cloud computing vendors, Amazon Web Services, Google and Microsoft, are major players in the space, along with some fintechs.

"I think where you have a technology company that is further ahead in developing the technology, that could be a relatively efficient way to be able to access the technology relatively quickly, rather than building it yourself," Torrente said. "I think this whole notion of being comfortable with your third parties that you're outsourcing with, which is always a hot topic in the banking industry, comes into play as well."

In June, federal regulators released interagency guidance on third-party relationships, which placed the majority of risk management responsibility on banks. Whichever model or method banks use, they'll be on the hook for explaining why and how they chose it.

Livingston said she thinks the ability to fine-tune foundation models for specific use cases is underappreciated. Financial institutions should configure, integrate and adjust a model's capabilities with their specific enterprise data to enhance value, she said.

Ally Bank partnered with Microsoft to build Ally.ai, which bridges GPT-3.5 with the bank's internal applications, data and data security. Westpac, based in Sydney, is using Kai-GPT, a large language model from Kasisto, a fintech that offers AI solutions to banks; the model is trained on conversations and data from the banking industry. Westpac will also tune the technology on its proprietary content to reduce the potential for hallucinations, or instances in which generative AI makes up information.
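Grounding a general-purpose model in proprietary content, as Ally and Westpac describe, is often done by retrieving internal documents and instructing the model to answer only from them. The sketch below is a minimal, hypothetical version of that pattern, assuming an OpenAI-compatible endpoint and a placeholder search_policy_documents helper; it is not how Ally.ai or Kai-GPT is actually built.

```python
# Minimal, hypothetical retrieval-grounding sketch; not Ally.ai or Kai-GPT.
from openai import OpenAI

client = OpenAI()  # assumes an OpenAI-compatible endpoint and API key

def search_policy_documents(question: str) -> list[str]:
    """Placeholder for a bank's own document search (e.g., a vector index)."""
    return ["Wire transfers over $10,000 require dual approval per policy 4.2."]

def answer_from_policies(question: str) -> str:
    """Answer only from retrieved internal passages to limit hallucinations."""
    passages = "\n".join(search_policy_documents(question))
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for whichever model a bank licenses
        messages=[
            {"role": "system", "content":
                "Answer using only the policy excerpts provided. "
                "If the excerpts do not cover the question, say so."},
            {"role": "user", "content": f"Excerpts:\n{passages}\n\nQuestion: {question}"},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

print(answer_from_policies("Who has to sign off on a $25,000 wire?"))
```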

Microsoft has invested $13 billion in OpenAI, AWS is funneling $100 million into a generative AI innovation center and Google has invested a reported $300 million in generative AI company Anthropic.

Where to find the people?

Banks need cross-disciplinary teams to implement and manage generative AI and large language models, factoring in areas like risk, business applications and technology.

"Hiring will be a piece of the strategy," Torrente said. "Upskilling the existing workforce will be really important as well, given over time, [generative AI] will become more and more pervasive within the various functions in the bank and within the various business lines in the bank."

Livingston said it's imperative that banks allot resources to employees who can support these innovations.

"I think everyone's going to need to do it, or they're going to be out of the game," Livingston said. "You need to either identify or hire the appropriate resources to support and sustain these innovations. It is going to be part of your core infrastructure in your core tech stack. And I think it will be very hard if not impossible to compete without having AI as a core part of your enterprise architecture."

Livingston said the workforce gaps are usually in technology. Banks need an enterprise architect to piece together the components of AI, like applications, integrations and data, a role that can often be filled by upskilling someone with a development background. Data science skills, like tuning and adapting models, are harder to build through upskilling, Livingston said, but she thinks the financial services industry has a leg up because of the amount of quantitative and mathematical talent already in the space.

The KPMG report also showed that financial services executives are more likely to think they have the right people in place to adopt generative AI than executives in other industries.

JPMorgan said in May that it had hired 900 data scientists, 600 machine learning engineers and 200 AI researchers to execute its technology initiatives.