A third of banks ban employees from using gen AI. Here's why.

False information, job losses, and the erosion of skills and human interaction, among other concerns, have bankers worried about deploying both generative artificial intelligence, like ChatGPT, and longer-established forms of AI, like machine learning, according to a new survey of American Banker readers.

Though financial services leaders are aware of and hopeful about the benefits of using artificial intelligence to become more efficient and to help employees do their jobs, 30% still ban the use of generative AI tools within their company, according to a survey conducted by Arizent, publisher of American Banker. 

About 15% said they have completely banned the use of generative AI — algorithms that can be used to create new content — for all employees. Another 20% said they restrict use of gen AI to specific employees for limited functions or roles, and a further 26% said they don't ban gen AI today but are considering putting a policy in place. Asked whether they will loosen or remove employee restrictions on publicly available generative AI tools in the next year, 39% said no and 57% said maybe.

A key obstacle to bankers' use of generative AI is the difficulty of assessing the risk of a gen AI application, according to Chris Nichols, director of capital markets at SouthState Bank, a $45 billion-asset institution based in Winter Haven, Florida, and an early adopter of generative AI.

"The mere fact that generative AI does not answer a question the same way each time gives bankers cause for concern," Nichols said. "No model in the human experience has been like that — not traditional AI, not Google search and not any bank model." 

Asked about the long-term risks related to the use of AI in their industry or profession, 26% of bankers said inaccuracies, hallucinations and misinformation. When later asked how concerned they are about the potential risk to their business from using generative AI, 80% said they were somewhat or very concerned about nonsensical or inaccurate information. 

"The lack of fact-checking that occurs," one banker said, was his top concern about gen AI in banking.

"Due to AI hallucinations, oftentimes the information is either erroneous or outdated and company employees just use the data without verifying through secondary sources. Folks are going to take the results as gospel truth ... most gen AI has truth, but also errors," said another banker.

"I am concerned that we don't know sources for the content created and, therefore, it is more difficult to fact check the product for accuracy," said a third.

If a financial advisor asks an AI tool a question, is given results that sound accurate but aren't, and offers that information to a client, that puts the bank at legal or regulatory risk, pointed out Matt Lucas, field chief technology officer of Stardog, who until a week ago was a technology executive at Morgan Stanley.

Some fear employees will rely on generative AI too much. "The content that it spits out is weak — it's college 101 level, vanilla creative writing and uninteresting," one banker wrote. "I am afraid that people will begin to accept this as status quo and we will lose intelligent language, creative and unique thinking, and new and fresh perspectives. It will homogenize everything and people will lose critical thinking skills and creativity."

Other long-term risks bankers cited about generative AI included job losses and weakening of personal relationships with customers. 

Several bankers surveyed said they worry sensitive data could leak out through a generative AI model. 

"You're sending firm information outside of the firewall, instantly that is now outside of our control," Lucas said. "And that's a huge concern."

Even if a bank buys an enterprise version of a large language model and uses it only in-house, a lot of work goes into adopting the technology safely, Lucas said.

In the early days after OpenAI made ChatGPT available, executives at some financial firms fed internal documents to it and asked for summaries. Wells Fargo, JPMorgan Chase, Citi and other banks quickly responded by banning or limiting employees' use of it.

It can be hard for a bank to police its employees' use of publicly available and extremely popular large language models like ChatGPT, Gemini and Claude. The IT department can keep the models off company-issued PCs. On personal devices, it's a different story.

Plagiarism was another concern for bankers. There are tools that detect plagiarism and can estimate how likely it is that a piece of content was AI-generated, Lucas pointed out.

Regulatory risk is another ever-present worry. "Generative AI requires significant oversight, and that is not in place," a survey respondent wrote.

Several cited ethical concerns about AI making decisions without human oversight or knowledge.

To get comfortable with generative AI, bankers need to "drive towards their fears," said Nichols.

"It will take more understanding and experience with generative AI," he said. "Bankers need to go hands on and play repeatedly with various models in multiple situations to derive a level of comfort with them."

Broader AI fears

Even though more traditional forms of AI like machine learning, neural networks and natural language understanding have been used in banking for decades, surveyed bankers still have worries about the rising use of all kinds of AI. 

Most — 70% — say they fear a loss of personal touch with customers, for instance. More than half (57%) of bankers surveyed said they're concerned that AI could introduce new ethical concerns and biases. Just under half (47%) worry about job losses.

Half of the surveyed bankers worry that using AI will degrade skills and erode critical thinking and analytical ability, causing a brain drain.

Some experts object to the idea that AI makes us dumber.

"I would take the opposite side of that argument," said Luis Valdich, managing director of Citi Ventures, in a recent American Banker Leaders Forum. "I think there is an opportunity to leverage these tools to summarize significant information, distill important insights from it, and then use critical judgment and higher level thinking to take that distilled information into second-order implications. If the tool is working as intended, I think that it would require upskilling and higher level skills than the reverse."

Lucas compared the advent of generative AI to the early days of calculators. "People thought, no one needs to learn math anymore," he said. "As a computer science graduate, I can assure you I had to learn math, even with a calculator. So being able to understand how things work under the hood is still critical."

He did note, however, that junior developers used to write basic, even boilerplate, code and gained skills through that work. With the growing use of code-generation tools, that work is going to AI so that developers can focus on advanced implementations, which means financial institutions and schools will need to ensure young developers still learn those base-level skills.

And 75% of bankers said stronger guardrails need to be in place to govern the use of AI in banking. 
                   
"I think we, as a whole in the industry, do support regulation," Lucas said. "It's going to allow both the AI providers and the clients to grow together in a safe and ethical way to adopt this new technology."

Finally, a lack of AI talent holds banks back from jumping in fast. Asked if their organization is willing to pay more for employees with AI skills, 18% said yes, 30% said not yet, 9% said no and 43% said they're not sure. Large global and national banks were more likely to say they're willing to pay more for employees with AI fluency.

Most experts agree that as AI models are continuously trained, they will get better over time. For now, banks are deploying them alongside humans. For instance, they're giving co-pilots to programmers to help them generate code and to loan officers to help them evaluate potential small-business borrowers.

"The tools are not quite to the point yet where they're going to take over those types of roles," Lucas said. "Personally, I think that we're many years away from that type of power. It will eventually come, and it's my hope that as the models get much more powerful, that these concerns are addressed."
