If there were ever a job for artificial intelligence, it's this one: winnowing down the hundreds of thousands of false positives generated by banks’ anti-money-laundering efforts.
Software can analyze data much faster than humans can, and it can sift through masses of transaction data quickly to spot the likeliest signs of serious financial crime.
But a funny thing has happened on the way to what should be an AI-led revolution: banks worry about what their regulators would say if they filed fewer suspicious activity reports, especially if their rivals continue to submit far more.
“There is a culture where regulators will go in and say you’re not filing enough SARs,” said Micah Willbrand, director of AML at NICE Actimize. “They might say, if you’re a $10 billion bank, you should be filing 10,000 SARs a year like your peers.”
The mindset is “when in doubt, file a SAR,” said Kareem Saleh, chief of staff at ZestFinance, a fintech that has created AI-based systems for making credit decisions and spotting unusual financial activity.
“All the banks are worried that if they use machine learning, the number of SAR filings will go way down and the regulators will say, what happened?” Saleh said. “How come your SAR filings fell by 50%? Maybe there’s money laundering you’re not catching.”
Therefore, he said, banks don’t want software to model whether there’s actual money laundering, but whether a human would have filed a SAR.
This obviously defeats the purpose of using artificial intelligence.
Cathy Bessant, chief operations and technology officer at Bank of America, expects this issue will get resolved over time.
“The intent of anti-money-laundering and economic sanctions work is to catch bad guys, so anything that produces a better outcome has to be pursued,” she said. “The reason false positives are a problem is they distract activity away from bad guys. So we have to get better at using data and modeling and artificial intelligence to do that.”
Bessant suggested the technology needs to be explained properly to regulators.
“If we stay focused on the objective, which is crime detection and prevention, and we can prove the efficacy of what we’re doing, that argument typically wins,” she said. “Regulators are as zealous as we are about catching bad guys, if not more zealous, and more worried about the risk of missing something, because the political pressures around that are too high. So it’s very important to pursue it because information can only be good in this regard.”
A Singapore bank's experiment
OCBC Bank is one company U.S. banks and regulators could keep an eye on. The Singapore bank, which is slightly larger than a BB&T or State Street in the U.S., has been testing ThetaRay’s AI-based AML software for several months. It has seen a nearly 35% reduction in the number of alerts generated.
Other providers of AI-driven AML software include Zest, Merlon Intelligence, QuantaVerse and Attivio. In February, NICE Actimize released a new version of its AML software with a machine learning component that does anomaly detection and clustering.
“We felt that was quite a good result,” said Loretta Yuen, group general counsel of OCBC. “We also noticed during the proof of concept that the accuracy rates of identifying suspicious transactions increased by more than four times when we were using the software.”
The software uses an algorithm that is not reliant on an exhaustive set of rules, she said.
“The algorithm goes into the data environment, and it intelligently looks for and detects anomalies in transaction behavior,” Yuen explained. “It does so by accessing broad parameters such as our products, our customers and their risks, and together with the other diverse data sources that it has, the algorithm arrives at a holistic and contextual data analysis. That helps a lot because it’s no longer driven strictly by rules.”
The AML rules themselves generate bad leads.
“The AML systems used by U.S., European and indeed most banks around the world are rules-based, meaning they generate multiple alerts that have to be manually trawled through to ascertain which of these transactions need to be further reviewed for any possible financial crime,” Yuen said.
“It’s very time-consuming,” she said. “It can take days or weeks, especially for complex transactions. The demands of AML monitoring have become really heavy. The one thing we want to move away from is this rules-based approach that doesn’t account for potentially suspicious transactions that these systems cannot detect because they were not captured in a predefined set of rules.”
For instance, an AML rule might flag every transaction above $1 million. But size alone is not a true indicator of money laundering.
The ThetaRay software looks for anomalies instead. If a customer has never conducted a $1 million transaction before, the system will catch it when one occurs.
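The contrast between the two approaches can be sketched in a few lines of illustrative Python. Everything here is hypothetical — the threshold, the "10x largest prior transaction" heuristic and the field names are invented for this sketch and are not ThetaRay's or any vendor's actual logic:

```python
# Hypothetical illustration of rules-based vs. anomaly-based AML flagging.
# All thresholds and heuristics here are invented for this sketch.

RULE_THRESHOLD = 1_000_000  # rule: flag every transaction above $1 million

def rules_based_flag(amount):
    """Classic rule: fires on every large transaction, regardless of customer."""
    return amount > RULE_THRESHOLD

def anomaly_flag(amount, customer_history):
    """Anomaly check: fires only when the amount is far outside
    this customer's own past behavior."""
    if not customer_history:
        return True  # no history at all is itself unusual
    baseline = max(customer_history)
    return amount > 10 * baseline  # hypothetical "10x largest prior" heuristic

big_corp = [1_100_000, 1_300_000, 1_250_000]   # routinely moves ~$1M
small_biz = [12_000, 9_500, 15_000]            # never above $15K

# The rule flags big_corp's routine $1.2M payment -- a false positive.
print(rules_based_flag(1_200_000))          # True
print(anomaly_flag(1_200_000, big_corp))    # False

# The same $1.2M from small_biz is genuinely unusual and trips the anomaly check.
print(anomaly_flag(1_200_000, small_biz))   # True
```

The point of the sketch is that the rule fires on every large transaction, while the anomaly check carries per-customer context and stays quiet when a big payment is routine for that customer.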
“One of the more exciting benefits we are looking forward to is the potential of this technology to detect previously unknown patterns, and this will deepen our understanding of financial crime and how to prevent our bank from running afoul of AML laws and regulations around the world,” Yuen said.
Some U.S. bankers have wondered if unsupervised machine learning could catch innocent people in a dragnet. For instance, if my cousin is in ISIS and has done some money laundering (for the record, she hasn’t) and I have sent money to her, a software program might think I’m part of her network of illicit activity.
Yuen does not see this as an issue because the technology does not look at just one red flag, but at a range of parameters, including products, customers and risks, to sift bad guys from good ones.
At the bank, internal analysts will watch the software closely to ensure the technology is performing consistently and as intended in all situations.
OCBC hopes to take the software live later this year. It will run it in parallel with the bank's existing system for a time.
'Finding the unknown unknowns'
According to James Heinzman, executive vice president of financial services solutions at ThetaRay, 99% of AML alerts are false positives at many banks.
ThetaRay’s founders, Amir Averbuch, a professor of computer science at Tel Aviv University, and Ronald Coifman, a professor of mathematics at Yale University, developed its algorithms over the course of 10 years, he said.
“Unsupervised machine learning says we don’t know what we’re looking for, we want to find things that are anomalous, things that are weird,” Heinzman said. “It’s finding the unknown unknowns. There’s no predefined model, no training — it’s recognizing anomalous behavior within huge data sets. We look at every field in a data set and compare to every other field.”
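The core idea — scoring each record by how far it sits from the bulk of the data, with no labels, no training set and no predefined rules — can be illustrated with a toy z-score detector. This is a minimal sketch of unsupervised anomaly detection in general, not ThetaRay's algorithm, and the transaction fields are invented:

```python
# Toy unsupervised anomaly detector: no labels, no rules, no training step.
# Each record is scored by its largest per-field z-score -- how many
# standard deviations it sits from that column's mean.
from statistics import mean, stdev

def anomaly_scores(records):
    """Return one score per record; higher means more anomalous."""
    columns = list(zip(*records))  # transpose rows into columns
    stats = [(mean(col), stdev(col)) for col in columns]
    scores = []
    for row in records:
        score = max(
            abs(value - m) / s if s else 0.0
            for value, (m, s) in zip(row, stats)
        )
        scores.append(score)
    return scores

# Hypothetical transactions: (amount, transfers_per_month, distinct_counterparties)
records = [
    (1_000, 4, 2),
    (1_200, 5, 2),
    (950, 4, 3),
    (1_100, 5, 2),
    (250_000, 40, 30),  # far from the rest on every field
]

scores = anomaly_scores(records)
print(scores.index(max(scores)))  # index of the most anomalous record: 4
```

A production system would use far richer multivariate methods than per-field z-scores, but the shape of the problem is the same: nothing tells the algorithm what "suspicious" looks like in advance; it surfaces whatever sits far from everything else.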
Rules-based systems no longer work, in Heinzman’s view, because the world is changing quickly, criminals are becoming more sophisticated, and the types of attacks that are occurring are things that have not been seen before.
“Computers are smarter than people are,” he said. “They can find things in data that people can’t find.”
Editor at Large Penny Crosman welcomes feedback at firstname.lastname@example.org.