'It's worth all the hype': SouthState Bank deploys enterprise ChatGPT

Banks have had a range of reactions to ChatGPT and other large language models. JPMorgan Chase, Citi, Bank of America, Deutsche Bank, Goldman Sachs and Wells Fargo have all restricted employees' use of OpenAI's ChatGPT, according to multiple reports. These banks fear sensitive or confidential information could leak into the public service.

But SouthState Bank in Winter Haven, Florida, is enthusiastically embracing an enterprise version of ChatGPT.

"It's game changing," said Chris Nichols, director of capital markets, at American Banker's Digital Banking Conference last week. "It's worth all the hype."

The technology is good at text assimilation, knowledge linkage and summarization, he said.

"We use it all the time to summarize a set of policies or a regulatory document or to compose emails or to help marketing create copy," Nichols said.

Other banks are also experimenting with ChatGPT-style technology. For instance, Westpac is testing the use of a large language model from Kasisto to assist borrowers and loan officers with the mortgage process.

Human resources departments could use generative AI to vet resumes and point out any red flags a recruiter would want to know about, said Ryan Favro, managing principal at Capco, in an American Banker podcast this week.

"For getting insights and summarizations and taking large amounts of data and breaking it into consumable chunks for humans to benefit from, it's fantastic," he said.

Most banks are interested in using this type of technology, according to Michael Haney, head of digital core at Galileo Financial Technologies, who says 80% of his company's bank customers are figuring out what to do with generative AI.

OpenAI is not the only provider in this space, Haney noted. Microsoft, Google, Amazon and many startups have gotten into generative AI. Oracle and Accenture are building up teams in this area. Virtual assistant technology providers like Kasisto are moving to large language models.

Implementing generative AI in a bank

SouthState Bank, which has $45 billion of assets, trains its enterprise version of ChatGPT only on bank documents and data. No customer data is fed into the system and it's not available to anyone outside the bank. 

The technology does act as a search engine, returning fully written-out answers to questions rather than a list of links, but it goes beyond that, Nichols said. 

"It literally connects knowledge," he said. "That's the game changing aspect of it. It can create a knowledge graph and then intelligently link ideas together to form coherent thought in the form of text." 

Employees can use it to sift through long text threads, summarize a discussion and ask whether any action items were assigned. It can also produce minutes for a meeting a person missed. 

"As you learn about any new topic, it doesn't make you an expert, but it takes a below average person or an average person and ups their game," Nichols said. "We're just bringing on a bunch of interns this week for our summer intern program and we're training them first and foremost on how to use our version of ChatGPT in order to quickly become experts at learning deposits or regulation."

Employees use it to compose emails. It could be used for expense reports, suspicious activity reports or fraud analysis.

"It solves the blank page syndrome if you can't get started," Nichols said. "It's not going to be your final work product, but it gets you going."

Nichols anticipates that about 2,000 of the bank's 5,000 employees will use it. Employees are asking the system questions about the bank's 400-page commercial loan policy and 600-page branch policies and procedures. A new teller who needs to reissue an ATM card will ask the system how to do it, for instance. 

"In our couple months of rolling it out, we get a five to eight X boost in productivity just by saving time," Nichols said. "It normally takes an employee 12 to 15 minutes to figure out the correct answer. That gets reduced to seconds."

It's fun, too: Employees often like to make ChatGPT speak like a pirate. "I'm not sure what that's all about," he said.

The accuracy issue

Nichols acknowledged some of the limitations of ChatGPT and similar large language models like Google's Bard. For instance, they're not good at predictive analytics. 

"I teach a class in AI and all the students say, oh, what does ChatGPT predict for March Madness or for the Kentucky Derby?" Nichols said. "It really sucks at that. Traditional neural networks are much better at grabbing a large set of data and then looking at what's relevant." The bank does not use generative AI for predicting bank performance, branch performance or deposit levels.

If the bank updates its loan policy or branch procedures, it's hard to be sure the system will rely on the new version and disregard the old one. 

"One of the limitations is, these generative AI models don't have a great way of weighting data," Nichols said. "You have to be really careful about that."

Then there is the main issue with ChatGPT and Bard: Sometimes they hallucinate, or make up answers. This means they need human oversight, Nichols said. 

"But on average I'll trust it more than I'll trust a human," he said.

Even within SouthState Bank's walled garden, in which ChatGPT is trained only on the bank's own documents, hallucination is still an issue, Nichols said. 

This is where "the beauty and power and downfall of generative AI reveals itself," he said. "You can ask it the same question two different times and get two different answers."

According to Nichols, most inaccuracies or hallucinations produced by ChatGPT or Bard stem from giving the system incorrect prompts or poorly structured information. Teaching people to ask better questions and structuring documents with descriptive headings improves the results. Testing is also critical.
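The article does not publish the bank's prompts, but the structure Nichols describes, a clearly stated question, clearly headed source material and an explicit instruction to cite, can be sketched as a simple template; the headings and wording here are illustrative only.

```python
# Illustrative prompt template; the headings and rules are hypothetical,
# not SouthState's actual prompts.
POLICY_QA_PROMPT = """Answer using only the bank document excerpts below.

## Question
{question}

## Rules
- Quote the heading of the section you relied on.
- If the excerpts do not answer the question, reply "Not covered in the excerpts."

## Excerpts
{excerpts}
"""

prompt = POLICY_QA_PROMPT.format(
    question="What is the procedure for reissuing an ATM card?",
    excerpts="### Branch Procedures > Card Services\nVerify identity, then order "
             "a replacement through the card services portal.",
)
```

Requiring the model to quote a section heading also produces the references and citations that, as Nichols notes, employees can always check.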

"You have to understand it first before you start to create regulations and procedures around it," Nichols said. 

Employees can always see the references and citations in the results. They can also have experts verify answers. 

The technology is not expensive to run, Nichols said. It costs about $50,000 to bring it to production, he said, and about $30,000 a month to test and run. 

"If you have 5,000 employees using it and they're five to eight times more productive, that $30,000 a month is nothing." 

The bank now has a risk governance committee that focuses on generative AI.

Overall, Nichols sees large language models as the primary way employees and customers will ultimately interact with platforms. 

"Humans like natural language questions," he said. "That changes how you deliver banking services across the spectrum. I think that is the future to some extent, if we can harness it and can safely use it."
