Grasshopper Bank finds compliance time saver: Generative AI

A common lament among compliance leaders at banks is that there's never enough time or people to thoroughly suss out the many red flags of money laundering, fraud and other forms of financial crime.

Grasshopper Bank, a New York financial institution with $733 million of assets, has given its compliance team an AI-based assistant to help with this work. The bank is using generative artificial intelligence from Greenlite to conduct the enhanced due diligence and monitoring of customers called for by the Bank Secrecy Act. This regulation requires banks to have controls in place and provide notices to law enforcement to deter and detect money laundering, terrorist financing and other criminal acts.

This use case comes at a time when banks are increasingly getting hit with consent orders for their fintech partners' BSA compliance shortcomings and pressure mounts to handle BSA compliance, along with most other tasks, more efficiently. In an October survey of bank compliance officers conducted by Hummingbird, for instance, 93% said efficiency is a top concern at their company, and 87% said they're constantly under pressure to increase efficiency.

"It's good to find and create efficiencies, instead of having a human being going out and taking hours to pull in what a computer can pull in in five minutes," said Becki Laporte, strategic advisor for fraud and AML at Datos Insights, in an interview. "There's a huge benefit to that."

Banks have used traditional AI, such as machine learning and neural networks, in anti-money-laundering and BSA compliance for several years. Using generative AI — predictive models that generate content — in such compliance is new territory. 

"At Grasshopper we're treading lightly into AI, exploring it," said Christopher Mastrangelo, chief compliance officer at the bank, in an interview. He was formerly a bank examiner at the Office of the Comptroller of the Currency and the Federal Reserve. "The partnership with Greenlite fit what we're looking to do with AI, which is to create efficiencies in our process." 

At Grasshopper, as at most banks, customers are risk-rated based on their activity, and the bank conducts periodic analysis of those with the highest risk ratings to meet BSA requirements. For instance, international companies with foreign transactions will typically have higher risk ratings. The enhanced due diligence includes a review of the nature of the business, its activities, its website location and any geographic risks. 
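The risk-rating approach described above can be sketched in a few lines. This is a hypothetical illustration only; the bank's actual rating criteria are not public, and the factors below simply follow the article's examples (international operations, foreign transactions, geographic risk).

```python
# Hypothetical risk-rating sketch. The factors and thresholds are
# assumptions for illustration, based on the examples in the article.
def risk_rating(is_international: bool, has_foreign_txns: bool,
                high_risk_geography: bool) -> str:
    """Return a coarse risk tier; high-tier customers get periodic
    enhanced due diligence reviews under the bank's BSA program."""
    score = sum([is_international, has_foreign_txns, high_risk_geography])
    return "high" if score >= 2 else "standard"

# An international company with foreign transactions would typically
# land in the high tier and be queued for enhanced due diligence.
tier = risk_rating(is_international=True, has_foreign_txns=True,
                   high_risk_geography=False)
```

In practice such scoring would weigh many more factors, but the structure is the same: rate every customer, then schedule the deepest reviews for the highest tier.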

The generative AI model "doesn't remove the analyst from the equation," Mastrangelo said. But it does gather a lot of publicly available information and lets analysts make more informed decisions, he said.

A human analyst doing enhanced due diligence on a small-business client — say, a bakery — spends a lot of time collecting data from different sources, according to Will Lawrence, CEO of Greenlite. 

"The first place they're going to go is the internet, to find open-source information about that bakery," Lawrence said in an interview. "They're also going to look at the internal documentation that the bank has about the bakery to make sure that their affairs are still in order and legitimate."

The analyst might look at the bakery's transaction patterns to make sure they make sense for a company of that size and type. 

Greenlite's generative AI model would collect additional data and summaries about that bakery from many sources, including corporate registries, news articles, company websites and social media channels.

Using AI to take some of the routine data collection work from investigators lets them focus on the most challenging, most complicated and hardest-to-detect forms of fraud, Lawrence said.

"We're providing that organization leverage by taking some of the very manual tasks from their plate," he said.

Grasshopper is also using the technology to help vet new clients. As the bank does its onboarding due diligence, collecting documents and conducting fraud reviews of potential new customers, the large language model helps gather data from additional public sources. Greenlite produces a report that answers predetermined questions about the business, for instance about the geographies it operates in, its source of funds, its activities and its beneficial owners.

This is on top of the know-your-customer check the bank does when it onboards new customers. Fraud checks are also done separately, using a system from Alloy. 

This latest generation of AI powered by large language models is much better at parsing through unstructured data than older versions of AI, Lawrence said. It can look through a document and identify anything that doesn't match up with what the customer said at onboarding. It can analyze transactions to see if they line up with what the customer said about its expected activity.

Greenlite's technology orchestrates across several models that are good at different tasks, Lawrence said. Some large language models, like Anthropic's Claude, are strong at document processing and reading through large amounts of text. Others are better at math, and still others, like OpenAI's GPT-4, are better at reasoning.
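The orchestration pattern Lawrence describes, routing each subtask to whichever model handles it best, might look roughly like the following. This is a sketch under stated assumptions: Greenlite's actual architecture is not public, and the routing table and model labels are invented for illustration.

```python
# Illustrative sketch of multi-model orchestration. The routing table is
# an assumption; only the idea (different models for different task types)
# comes from the article.
from dataclasses import dataclass

# Hypothetical mapping of task types to the model suited for each.
TASK_ROUTES = {
    "document_processing": "claude",       # reading large amounts of text
    "reasoning": "gpt-4",                  # multi-step analysis
    "math": "math-specialist",             # e.g. checking transaction totals
}

@dataclass
class Task:
    kind: str
    description: str

def route(task: Task) -> str:
    """Pick a model for a task; fall back to a general model."""
    return TASK_ROUTES.get(task.kind, "general-model")

# A due-diligence review broken into typed subtasks:
tasks = [
    Task("document_processing", "Summarize the bakery's incorporation filing"),
    Task("math", "Do monthly deposits match stated revenue?"),
    Task("reasoning", "Does onboarding info match open-source findings?"),
]
assignments = {t.description: route(t) for t in tasks}
```

The design choice here is the same one the article attributes to Greenlite: rather than asking one model to do everything, a thin router dispatches each subtask to a specialist.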

There are risks, as always, to using generative AI in compliance. In addition to the risk that a large language model will "hallucinate" or completely make things up, there is the risk that it could pull information from websites that have been fabricated.

"A human being might catch that, but a computer is not necessarily going to catch that," Laporte said.

To guard against hallucinations and fabricated sources, Greenlite maintains an audit log of all sources used in any given investigation. The source for every piece of data in a Greenlite report is cited, Lawrence said.

And the model doesn't make decisions, he said, it only provides information to human analysts who then use their own logic and analytical skills to draw their own conclusions and make a decision.
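The cite-every-datum pattern described above can be sketched as a small data structure. All class names, fields and URLs below are invented for illustration; the article confirms only that each report entry carries a source citation and that an audit log exists.

```python
# Illustrative sketch: a report where every finding carries its source,
# so an auditable trail exists for examiners. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Finding:
    question: str
    answer: str
    source_url: str  # every piece of data cites where it came from

@dataclass
class Report:
    subject: str
    findings: list = field(default_factory=list)

    def add(self, question: str, answer: str, source_url: str) -> None:
        self.findings.append(Finding(question, answer, source_url))

    def audit_log(self) -> list:
        """All sources consulted in this investigation, for review."""
        return [f.source_url for f in self.findings]

report = Report("Sample Bakery LLC")
report.add("Geographies of operation", "New York only",
           "https://example.com/registry/sample-bakery")
report.add("Beneficial owners", "Jane Doe (100%)",
           "https://example.com/filings/sample-bakery")
```

A human analyst reviewing the report can then check any answer against its cited source, which is what keeps the model in an advisory rather than decision-making role.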


Banks that use generative AI models in compliance need to make sure they test the system and not blindly rely on it, Laporte said. They need to document this testing and they need to be able to explain how it works to their regulator. 

In Grasshopper's case, the Greenlite software is not only helping with efficiency, it's providing more reliable and comprehensive data than human analysts typically can come up with, Mastrangelo said.

Deployment took a few months and involved making sure the software worked well with existing systems and connected to the right data sources. 

The software has reduced the amount of time it takes to do an enhanced due diligence review by close to 70%, Mastrangelo said. 

Laporte sees other banks talking about using large language models for this same use case, but hasn't seen other banks actually using them in production yet.

"My perception is banks want to be ready for this, but they don't necessarily want to be first," she said. "I hear buzz and discussion, and I think there's an appetite for it."
