Can AI spy financial crime without implicating innocents?

I was talking to a banking compliance executive recently about how banks are looking to use artificial intelligence to spot clues to crimes being committed by customers or employees.

This executive was clearly not buying into the hype.

“We’ve thought about that, but we don’t plan to use it at this time,” she said. “There’s too much risk of innocent people getting caught up in a dragnet.”

An AI engine could find a pattern of transactions or behavior among law-abiding customers that mimics money laundering or some other crime. A program that analyzes social media networks might see that I have a cousin in Iraq who is a member of ISIS and that I once co-owned a business with him (a hypothetical example), and flag me as suspicious.

Banks also worry about losing the human element and about how regulators will view their use of such new technologies, especially given regulations like the New York State Department of Financial Services’ anti-terrorism rules, which require banks to validate their transaction monitoring systems.

Proponents of AI scoff at such concerns.


“It's almost like saying you don’t want to use computers because you might get electrocuted,” said Solon Angel, founder of MindBridge, a company that uses AI to analyze financial statements and transactions for evidence of something untoward going on. “Yes, if you put your laptop into a bathtub full of water while it's plugged into the socket, you can get electrocuted if you jump into the bath. Last time I checked, people don't do that, and any sane person would not jump into a bath full of electricity.”

Thomas Bock, executive managing director and leader of K2 Intelligence’s Regulatory Compliance practice, noted that as with the rules-based systems banks use today, alerts don’t necessarily lead to investigations. If an anti-money-laundering system identifies three transactions that look suspicious, that doesn’t mean that a bank is going to file a suspicious activity report.

“It just means hey, this is unusual, I need to take a deeper dive and look at it,” Bock said, adding that AI doesn’t take away the need for a professional to investigate.

“It will present an opportunity to dig deeper,” he said. “The bank will have to make a decision: whether that was an innocent person and we understand the reason for that transaction so we don’t have to file a SAR or hey, we thought this person was innocent but they’re transacting with people connected to terrorists in Yemen.”

Bock said the bankers he talks to are interested in using AI in spotting crime, detecting fraud and complying with anti-money-laundering laws.

“Because they struggle with the necessary skill sets to investigate, to keep people in seats to investigate transactions appropriately, they really want technology to limit the cost of compliance, as it continually climbs,” he said.

Given the regulatory environment, banks are also cautious.

“Banks have been reviewing transactions in a certain way for a very long time, even prior to the passage of the USA Patriot Act in 2001,” Bock said. “I believe that the forward-thinking risk professionals in each bank will start to gravitate toward leveraging different technologies that will help them identify fraud and money laundering quicker.”

The argument for AI in financial crime detection

Traditional AML systems, surveillance programs and other software that look for odd behavior that might signal foul play tend to generate high volumes of false positives. Natural language processing, machine learning, neural networks and other types of AI can comb through huge volumes of real-time data in a way that humans can’t, pick up on patterns humans may not have the capacity to see, and help narrow down the red flags to a smaller number of truly alarming transactions. In other words, AI spots the signal in the noise.
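As a toy illustration of narrowing alerts to the genuinely unusual cases, here is a minimal statistical screen that flags only transfers far outside an account’s own baseline, rather than alerting on every rule hit. All figures are invented for the example; real AML models are far richer than a two-sigma cutoff.

```python
from statistics import mean, stdev

# Hypothetical daily transfer amounts for one account (in dollars).
transfers = [120, 95, 110, 130, 105, 98, 125, 9_800, 115, 102]

mu, sigma = mean(transfers), stdev(transfers)

# Flag only transfers far outside this account's own historical pattern,
# instead of alerting on every round-number or cross-border rule match.
alerts = [t for t in transfers if abs(t - mu) > 2 * sigma]
print(alerts)  # the $9,800 outlier is the lone alert
```

Even this crude per-account baseline cuts the alert count from a rules engine’s many hits down to one, which is the “signal in the noise” idea in miniature.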

“Looking through large volumes of data is a great undertaking. It’s not easily done with the human eye,” said Omer Khan, director in K2 Intelligence’s Regulatory Compliance practice.

Bock said AI could also help identify problems by pulling in data from social media, negative news or public records.


“Pulling different, disparate data sets together, identifying potential relationships or linkages, that probably would not exist if you just looked at one single wire transfer between two parties,” Bock said.

Credit card companies have been using neural networks and AI to identify potential fraud for years, Bock noted.

“Even the hijackers back in 2001 were triggering alerts, because they purchased first-class, one-way tickets,” he said. “We’ve all gotten emails from our credit card companies saying this was a purchase, was this you? That’s because they have sophisticated engines that are learning from our behavior. Where we spend, what we spend, how much we spend, what locations. I just got one over the weekend. Unfortunately I get them all too often.”

And far from implicating the innocent, AI could help make sure, when a transaction or negative news is flagged, that the right person is identified as behind it.

As the skeptical banker said, there is still some worry that innocent people could be fingered for doing something nefarious. That is clearly something AI firms will need to address.

“It sounds like a relatively simple thing, but when you add up all the people who are conducting transactions that have the same names or slightly different spellings of names, it’s a significant problem,” said David McLaughlin, CEO and founder of QuantaVerse. QuantaVerse is one vendor of AI software for financial crime detection; others include MindBridge; IBM Watson; Digital Reasoning; Palantir; Sybenetix, which has just agreed to be acquired by Nasdaq; Neurensic; and Merlon.
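The name-matching problem McLaughlin describes can be sketched with a minimal fuzzy-matching example. The names and the review threshold below are invented for illustration, and production screening systems use far more sophisticated matching than a simple similarity ratio.

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    # Ratio in [0, 1]; 1.0 means the normalized strings are identical.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

watchlist_name = "Mohammed al-Rashid"   # hypothetical watchlist entry
customer_names = ["Mohamed Al Rashid", "Maria Rossi", "Mohammed al-Rashid"]

for name in customer_names:
    score = name_similarity(watchlist_name, name)
    # A crude cutoff: near-matches get routed to a human for review,
    # which is exactly where false positives on innocent customers arise.
    print(f"{name}: {score:.2f}", "REVIEW" if score > 0.8 else "clear")
```

Note that the variant spelling “Mohamed Al Rashid” scores nearly as high as an exact match, so an innocent customer with a similar name lands in the review queue — the “significant problem” McLaughlin points to.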

QuantaVerse takes the identity verification data banks already get from providers like Thomson Reuters, LexisNexis and FactSet and analyzes it in aggregate.

“Those databases are great, but they have different pieces of information about individuals and entities,” McLaughlin said. “You have to look at the whole landscape of digital clues: Does this person have some adverse media about them? Is there information on the Deep Web that would indicate there’s risk around the person? … By using machine learning capability, you can begin to put pieces of the puzzle together, and the more items of truth you find that confirm an identity, the more your confidence level can go up.”
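McLaughlin’s point that confidence rises as more “items of truth” confirm an identity can be sketched with a naive noisy-OR combination of independent clues. The clue list and probabilities below are assumptions for illustration; real entity-resolution models weight and correlate evidence far more carefully.

```python
# Hypothetical, independent "items of truth" linking a record to a person,
# each with an assumed probability that it represents a genuine match.
evidence = {
    "passport number matches": 0.90,
    "date of birth matches": 0.80,
    "address matches public records": 0.70,
}

# Noisy-OR: confidence that at least one clue is a true link.
# Each added clue shrinks the chance that ALL clues are coincidental,
# so overall confidence climbs as the puzzle pieces accumulate.
confidence = 1.0
for p in evidence.values():
    confidence *= (1 - p)
confidence = 1 - confidence

print(f"identity confidence: {confidence:.3f}")
```

With three clues the residual chance that every one is a false match is 0.1 × 0.2 × 0.3 = 0.006, so confidence reaches 0.994 — the “mosaic” effect in a single multiplication.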

My hypothetical cousin in ISIS would be just one piece of the puzzle, McLaughlin explained.

“If you combine that with transactions, with topology that indicates terrorist financing, and then you find some transactions that there’s no economic justification for, now all of a sudden your case is becoming more interesting,” he said, adding, “It’s a mosaic you’re examining.”

Will regulators soon use AI?

Although some banks worry about what their regulators will think about the use of AI in crime sleuthing, some regulators are experimenting in the space themselves. U.K. regulator Bank of England, for instance, is piloting the use of MindBridge to analyze financial records for signs of fraud and wrongdoing.

The central bank won’t comment on the pilot yet, but Angel said the software will help regulators detect problems such as Madoff-style fraud.

“How do you assess the viability and the trust in capital markets as a regulator?” Angel said. “That’s quite essential. They need to ensure there are not too many Ponzi schemes, they need to ensure that the financial institutions have stability and are solid so you don’t have another 2008.”

AI can effectively assess what’s going on in the marketplace without requiring prior context about a firm, Angel said. MindBridge’s software analyzes a company’s books and flags anomalies.

Some have questioned whether banks’ efforts to catch criminals are worthwhile. NSA whistleblower Edward Snowden recently said “Know your customer” laws do “bupkis” to stop terrorism.

Bock disagrees. “I think banks have their challenges because of the sheer volume of data and information that flows through the financial system on a daily basis,” he said. “A lot of banks struggle, but it’s not due to a lack of effort. For every bank it’s one of their top priorities. We probably wouldn’t be in business if it wasn’t.”

Dan Stitt, director of financial crime analysis at QuantaVerse, said the AML regime is making a difference.

“I can tell you we are reporting a lot of SARs, a lot of terrorism to the government weekly and daily,” he said.

Some have even questioned whether banks should be deputized to do law enforcement at all. Others say there’s no viable alternative.

“Are we realistically going to take every transaction that goes through every financial institution and give that to FinCEN, which doesn’t have a budget to look at it and probably would have the same challenges around finding human capital that can do it?” McLaughlin said. “Banks utilize our financial ecosystem that our society has set up to make profit. We should hold them accountable. I don’t think it’s asking that much for them to do that.”

Editor at Large Penny Crosman welcomes feedback at penny.crosman@sourcemedia.com.
