- Key insight: Bankwell invites competitors to a summit to talk about regtech and the promise and peril of AI.
- Expert quote: "What we're really talking about here is whether or not AI is going to eat the whole universe and put us all out of a job in the next 12 months or so," said Jesse Silverman, counsel at Troutman Pepper.
- Forward look: AI could be part of virtually everything banks do, but the humans inside these institutions need to control and monitor the process.
Steve Brunner, chief risk officer at Bankwell Financial Group, has a top concern about using AI at his company: data loss prevention, or the fear that information could be compromised.
"If you upload internal documents to ChatGPT, God knows where it goes," Brunner told American Banker in an interview Tuesday at Fairfield University. "You always want to secure the customers' information."
Brunner was sharing his concerns at a regtech "innovation summit" that his New Canaan, Conn.-based bank organized for itself and its local competitors. The topics ranged from regtech adoption to third-party risk management, but one really got participants talking: artificial intelligence.
Brunner also worries about fraudsters using AI to create fake identities and "mimic people's voices, their looks, their mannerisms, duplicating documentation that banks typically verify against or rely on, like bill payments, utility bills."
"We're not just taking any random person coming off the street that wants a loan," he said. "We're curating those relationships with our clients, so our bankers know our clients and their business intimately."
At the event, the conversations turned to AI almost immediately.
"What we're really talking about here is whether or not AI is going to eat the whole universe and put us all out of a job in the next 12 months or so," said Jesse Silverman, counsel at Troutman Pepper. "I have no opinion. AI right now – not in the future, not in 12 months, not in 10 years – is absolutely perfect. It works."
Regtech use cases for AI
Bankwell is considering AI technology that could ease regulatory chores such as compliance with the Bank Secrecy Act and anti-money-laundering rules.
"If an analyst were to look at a customer's transactions, export it into Excel, do a pivot chart and take all the necessary steps, AI can essentially do it fast, and provide a transactional summary for the analyst," that can help the human determine whether a transaction or alert makes sense, Brunner said.
Though humans are making the decisions, "if AI could do everything up to that decision making, great," he said.
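To make the idea concrete, the workflow Brunner describes might look something like the Python sketch below. The column names, table layout and $10,000 flag are illustrative assumptions, not a description of Bankwell's actual systems.

```python
# Hypothetical sketch of automating the pivot-style transaction review
# Brunner describes. Column names (account_id, amount, category,
# posted_at) are assumptions; posted_at must be a datetime column.
import pandas as pd

def summarize_transactions(txns: pd.DataFrame, account_id: str) -> str:
    """Build a short transactional summary an analyst can read before
    deciding whether a BSA/AML alert makes sense."""
    acct = txns[txns["account_id"] == account_id].copy()
    acct["month"] = acct["posted_at"].dt.to_period("M")

    # The same pivot an analyst would build by hand in Excel:
    # monthly totals broken out by transaction category.
    pivot = acct.pivot_table(
        index="month", columns="category",
        values="amount", aggfunc="sum", fill_value=0.0,
    )

    # Simple illustrative flag; real alert logic would come from the
    # bank's own BSA/AML monitoring rules.
    large = acct[acct["amount"].abs() > 10_000]

    lines = [f"Account {account_id}: {len(acct)} transactions", pivot.to_string()]
    if not large.empty:
        lines.append(f"{len(large)} transactions over $10,000 flagged for analyst review")
    return "\n".join(lines)
```

The human analyst still makes the call; the sketch only automates the export-and-pivot legwork up to that point.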
Christine Conrad, senior vice president of operations and innovation at Middlesex Federal Savings in Cape Cod, said her bank is also looking at using AI for compliance tasks. It recently bought software that lets employees create AI agents.
"We're always trying to figure out, first of all, is this something we want or something we need?" Conrad said. "Second of all, is it a process that can be solved more efficiently through a different means, or one we already have?"
Silverman said AI could be used almost anywhere. "Look at every single aspect of your business," he said. "Where do you spend the most human time? And I guarantee right now there's an AI solution that will make it more efficient." Examples include underwriting and call reviews – bank compliance officers typically review a sampling of sales calls to see what clients are being told.
"Now, with the same staff, you can choose to review 100% of calls leveraging AI, though you still need that human in the loop," Silverman said. "So if you've got fantasies of AI just replacing everything, please get rid of that."
Test automation is another good use case, said Bill Stoeffel, senior manager at Grant Thornton.
"It's where there's historically been a large reliance on manual processing," he said. "There's generally robust data that's available, and AI is very effective at aggregating and analyzing data, such that a compliance officer can give basic and rote tasks to an AI agent, and you can get pretty robust analysis out of it."
What can go wrong
AI brings risks, even though machine learning, neural networks and large language models have been around for a long time, Stoeffel said.
For instance, there's the potential for data inaccuracies in large language models used in compliance, he said.
"We're all acquainted with the concept of garbage in, garbage out," Stoeffel said. "You have to understand what the model is doing as it processes its information, so that you understand it's a consistent and repeatable process," so it can be reproduced for auditors. Banks need to assess and review data quality and put data governance mechanisms in place, he said.
About 11 years ago, Silverman tested a machine learning system that a bank used to produce adverse action notices for people being turned down for loans.
"At the time, this was brand new," he said. "We had no idea what was going on inside that black box, and it started producing some wild, although in hindsight understandable, adverse action notices."
One notice, for instance, said the bank wouldn't provide a loan because the applicant had not declared bankruptcy.
"The system had plainly decided that for this individual applicant it would have been objectively better for that person to have declared personal bankruptcy," Silverman said. "There are lots of those kinds of challenges. You're going to have to test and test, but they're everywhere."
Working with regulators
Banks should tell their regulators about their AI plans, said Joe Chambers, general counsel and chief of staff of the Connecticut Department of Banking.
"If one of our regulated entities wants to implement an AI regtech solution, we would welcome that engagement," he said.
Silverman agreed banks need to "socialize" their AI plans with regulators. "It's one thing that it works, but it's another aspect that everyone knows and believes that it works," he said. "We're going to be in that cycle for a while, where you're going to be having conversations with the regulators and saying, 'This is what we do, and this is why we believe it works, or in many cases, why we believe it works better.'"
Connecticut regulators are exploring whether to issue AI governance guidance to banks, Chambers said.
"AI can increase efficiency, enhance customer service, provide a regulatory compliance backstop or replace manual processes," Chambers said. "All of those are good things, but the flip side is that using AI can introduce risks. Those risks are heightened with consumer information, data security risks."
His department expects all banks to develop and implement an AI governance framework appropriate to their size and the scope, complexity and data sensitivity of the AI systems used, he said.
AI oversight can't be relegated to the IT staff, Chambers warned. "It has to be from the top, and that means the board and senior management."
Banks also need to have AI usage policies, Silverman cautioned.
"If you don't have AI policies at your bank, you probably have employees who are taking your bank work product and putting it into ChatGPT" or Google Gemini or Microsoft Copilot, he said. "So one of the most important things is getting control of that data and making sure that you've got some agreements in place that you know where that data is going."
Policies need to be enforced in some way, Stoeffel said.
"I definitely agree 100% that folks are utilizing ChatGPT and all of the large language models that are publicly available to analyze data," he said. "And in some ways, that can be appropriate, but in other cases, are you understanding what potential risk exposures come about through the use of those types of tools?" Internal monitoring tools can track this, he added.
Testing is also key. "Regulators are going to be expecting that entities using these tools are appropriately testing their accuracy, and that if they're going to rely on the tools, to ensure that they work at least as well as the old system," Stoeffel said.
Black boxes are out, as bank regulators expect AI-driven decisions to be explainable.
"It's got to be explainable and reproducible, meaning you've got to have a document that shows what you did," Silverman said. "The bottom line is, all of those policies and procedures that you have" need to be applied to any new AI tool.
Silverman warned that any use case that involves personally identifiable information may trigger complicated state laws.
"In some states, you have to disclose affirmatively that you're using AI for underwriting purposes, but you don't have to disclose that if you're using the AI to monitor calls," he said. The many conflicting state rules provide "wonderful lifetime employment for me as a lawyer," he joked.