Westpac is testing ChatGPT-like AI, minus the hallucinations

David Walker is well aware that large language models such as ChatGPT, which are trained on vast swaths of the public internet, can hallucinate. They can even fabricate historical events that never happened.

"They can tell lies, they can make up information," said Walker, who is chief technology officer of Westpac, in an interview. "They're incredibly powerful."

The bank, which is based in Sydney and has more than 12 million customers, can't afford to let a public version of ChatGPT make up or hallucinate answers for customers or employees who use a virtual assistant. GPT (generative pre-trained transformer) models are artificial neural networks that are pre-trained on large data sets of unlabeled text and able to generate humanlike content.

But Walker does want to give employees and customers the ChatGPT experience of humanlike answers to their questions — if it can be done safely, with assurance the answers are accurate. 

The bank is working with Kasisto to test its Kai-GPT, a large language model trained only on conversations and data in the banking industry. 

"Hallucination in public AI models is unavoidable and can get pretty bad," said Zor Gorelov, CEO of Kasisto, in an interview. This is why banking GPTs need accuracy, transparency, trust and customizability, he said. 

This is also why banks like Westpac will focus on internal use cases for generative AI — giving it to front-line bankers, contact center agents and mortgage workers, Gorelov said. Westpac will train Kai-GPT on its proprietary content, and thereby dramatically reduce the risk that the system will hallucinate, Gorelov said. 

Walker hopes to provide more complete and more conversational help to customers and staff, for instance, in the mortgage lending process. 

"When people apply for a home loan, they have to fill in lots of forms," Walker said. "We need to know who you are, we need to know all kinds of things about you. This is going to aid us in checking the quality of information coming in, so it's going to stop us having to go backwards and forwards to our customers. It's going to streamline the process. It's going to help our customers, it's going to help our lending staff, and it's going to make things much more straight through and seamless."

Other banks are likely to do similar experimentation over the next two years, according to Peter Wannemacher, a principal analyst covering digital banking at Forrester.

"Specialist tools built on top of a large language model will be launched by vendors, traditional financial institutions and fintechs," Wannemacher said. "Most traditional financial institutions will start by focusing on employee-facing generative tools, rather than exposing a chatbot built on top of a large language model directly to the end user." 

But he also thinks banks will proceed with caution. 

"Large language models have suddenly become both better and widely utilized, but they still fail spectacularly and can even generate totally wrong, even fraudulent outputs," Wannemacher said. "Money is a highly sensitive area of people's lives, and traditional banks will rightly resist launching anything customer-facing until they have a much better sense of what can go wrong and how to address it." 

To prevent Kai-GPT from answering a question based on information from another bank that doesn't pertain to Westpac, Walker is using what he calls layering. One layer of the model is trained on data and conversations from many banks. Another layer is trained on information specific to Westpac, such as its policy documents, forms and websites, as well as recordings of conversations in the bank's contact centers.

"As it formulates an answer, to work out the intent of the question, it will draw on that industry layer," Walker said. "It's got the knowledge of all those conversations from all those banks and it'll be smarter because of that. But it'll draw even deeper down into the Westpac-specific model when you're talking about terms of a home loan or a deposit interest rate. Those layers work together to formulate these really rich, wonderful answers, but in an accurate and concise way." 
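The layering Walker describes can be pictured as a lookup that prefers bank-specific knowledge and falls back to industry-wide knowledge. The sketch below is purely illustrative, not Kasisto's actual implementation: the layer contents, the "Form HL-1" document and the keyword-overlap scoring are all invented placeholders standing in for real trained models.

```python
# Illustrative sketch only (NOT Kasisto's implementation): two knowledge
# layers, with the bank-specific layer consulted before the industry layer.

def score(query, document):
    """Crude relevance score: how many words of the document appear in the query."""
    words = set(query.lower().split())
    return sum(1 for w in document.lower().split() if w in words)

# Hypothetical content for the industry-wide layer (many banks' knowledge).
INDUSTRY_LAYER = {
    "home loan": "A home loan typically requires proof of income and identity.",
    "deposit": "Deposit accounts accrue interest at a rate set by the bank.",
}

# Hypothetical content for the bank-specific layer ("Form HL-1" is invented).
BANK_LAYER = {
    "home loan": "Westpac home loans require Form HL-1 and two recent payslips.",
}

def answer(query):
    """Draw on the bank-specific layer first, then fall back to the industry layer."""
    for layer in (BANK_LAYER, INDUSTRY_LAYER):
        best_topic, best_doc = max(layer.items(), key=lambda kv: score(query, kv[1]))
        if score(query, best_doc) > 0:
            return best_doc
    return "No answer found."
```

A query about payslips for a home loan would resolve in the Westpac layer, while a question about deposit interest rates, absent from the bank layer here, would fall through to the shared industry layer.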

Using as much data as possible gives a richness and precision to answers, Walker said: "It's still a matter of identifying what you want to train on and what knowledge you need the GPT engine to understand." 

The bank is moving slowly for now to ensure the new technology fits within its responsible AI policy and "how we think ethically about protecting our staff and our customers," Walker said. "We want to make sure that we don't run ahead too fast and throw something out there that could do harm. We have the principle 'do no harm.' It's sort of fundamental."

The first go-live of Kai-GPT at Westpac will be in mortgage operations. Over the next few months, the bank will workshop the use of the technology in the loan application process to help borrowers know what forms they should use and what information the bank needs to receive, which should help speed up the process for the bank.

Once Walker's team feels confident about Kai-GPT's ability to help employees and customers and do no harm, he thinks he'll be able to quickly deploy it to other areas of the bank.

The key advantage of a large language model over earlier generations of chatbots in use at Westpac is the richness of the answers it can provide, Walker said.

"It provides an answer in a way that's more like a human talking to a human, so customers or employees feel like they're getting the information they need rather than just sharp one-liners," he said. "We think this is quite a game changer when it comes to this next generation of working with artificial intelligence." 

Westpac already uses Kasisto's Kai software as an orchestrator of other chatbots the bank uses in areas like service management, human resources and risk management. If an employee can't remember which bot to go to for information, he or she can go to the orchestrator and get routed to the right chatbot.
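The routing role the orchestrator plays can be sketched in a few lines. This is a minimal illustration, not Kai's actual logic: the bot names and routing keywords below are invented for the example.

```python
# Illustrative sketch of an orchestrator that routes an employee's question
# to the specialist chatbot most likely to hold the answer.
# Bot names and keyword sets are hypothetical.

ROUTES = {
    "service_bot": {"outage", "incident", "ticket"},
    "hr_bot": {"leave", "payroll", "benefits"},
    "risk_bot": {"compliance", "audit", "risk"},
}

def route(query, default="service_bot"):
    """Return the bot whose keyword set best overlaps the query's words."""
    words = set(query.lower().split())
    best_bot, best_hits = default, 0
    for bot, keywords in ROUTES.items():
        hits = len(words & keywords)
        if hits > best_hits:
            best_bot, best_hits = bot, hits
    return best_bot
```

In practice an orchestrator would classify intent with a trained model rather than keywords, but the single-entry-point design is the same: the employee asks one assistant, and the question lands with the right specialist.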

The Australian bank has co-developed a portal to all the knowledge trapped inside multiple virtual assistants throughout the company.

"We thought that that was a very powerful way to handle conversation and we've found that really useful," Walker said. "It's a one-stop-shop entry point."

Kai-GPT was trained on Kasisto's own data, data from other banks Kasisto works with and information gleaned from financial websites, SEC filings and other sources. 

"Our goal is to create the best large language model in the world designed for banking and financial services and achieve what we call artificial financial intelligence," Gorelov said. "We feel that our job is to help our customers of all sizes to have the highest performing large language model that is designed and built for banking that provides accurate responses and knows more about banking than most bankers do."

Kai-GPT is transparent, Gorelov said, in terms of the data and methodology used for its training.

"It is trusted, because we've worked with banks over the past 10 years," he said. "We know how precise they are, how demanding they are when it comes to personally identifiable information and proprietary content."

The program is also customizable, so banks can inject their own content and make it work better on their own data sets.

The bigger the data set and the more questions a large language model is capable of answering, the more important, and the more difficult to enforce, guardrails become.

"The world went from prescriptive AI, where every intent, every response needed to be designed manually to generative AI, where you no longer need to anticipate every user's question and retrain the model when something new comes up," Gorelov said. "It's a different world we live in and we're quite excited about it. But guardrails and the AI protection, transparency, visibility of sources, those issues become more and more important."