When AI suspects money laundering, banks struggle to explain why

As financial institutions increasingly rely on artificial intelligence for anti-money-laundering and fraud detection, they face challenges in meeting regulators' demands for complete transparency and documentation. (Adobe Stock)

In recent years, regulators have warmed to the idea of banks using artificial intelligence to comply with anti-money-laundering laws and related regulations for countering the financing of terrorism and preventing other financial crimes. Increasingly, banks have also used this same transaction monitoring technology to protect themselves from fraud.

Using AI to monitor transactions for financial crimes comes with one main challenge, though. Regulators expect banks to interpret their models and explain each report of suspicious activity they flag, but many artificial intelligence models are black boxes. How can a bank explain to the Financial Crimes Enforcement Network how it determines which transactions are suspicious when the AI can't explain itself?

The answer, like regulators' expectations, is muddy. In general, though, banks need to keep records of how they train their transaction monitoring AI and of the process by which they adjust any thresholds used to flag transactions, a practice known as maintaining an interpretable model.

Banks and regulators are still debating what constitutes satisfactory documentation and reproducibility in transaction monitoring systems. Regulators have lower expectations for fraud detection systems, but legitimate customers whose accounts get closed over suspected fraud do demand answers.

Banks are responsible for providing "complete explainability and traceability" to regulators, according to Ashvin Parmar, global head of insights and data for financial services at consulting firm Capgemini. In the past, they accomplished this with rules-based systems for flagging transactions as fraudulent.

"This approach was intuitive and left clear logs indicating which rules were triggered in the decision-making process," Parmar said. "However, when dealing with a large number of policy rules, into the hundreds, several challenges emerged."

One such challenge is managing and updating a high volume of rules, which is a cumbersome and time-consuming process. This hinders banks' ability to adapt to evolving fraud patterns and regulations. The other challenge is high rates of false positives — erroneously flagging legitimate transactions as suspicious.
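
That older approach is easy to sketch. The rules and thresholds below are hypothetical, not any bank's actual policy, but they show why a rules-based monitor leaves such a clear audit trail and why broad rules generate false positives:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    daily_count: int  # number of transactions by this customer today

# Hypothetical policy rules; real systems may maintain hundreds of these.
RULES = {
    "LARGE_CASH": lambda t: t.amount >= 10_000,
    "HIGH_RISK_COUNTRY": lambda t: t.country in {"XX", "YY"},
    "RAPID_ACTIVITY": lambda t: t.daily_count > 20,
}

def screen(transaction: Transaction) -> list[str]:
    """Return the names of every rule the transaction triggered.

    The returned list doubles as the audit log entry explaining
    exactly why a transaction was flagged.
    """
    return [name for name, rule in RULES.items() if rule(transaction)]

triggered = screen(Transaction(amount=12_500, country="XX", daily_count=3))
if triggered:
    print(f"Flagged; rules triggered: {triggered}")  # e.g. ['LARGE_CASH', 'HIGH_RISK_COUNTRY']
```

Every new laundering typology means another hand-written rule, which is where the maintenance burden Parmar describes comes from, and a broad rule like the hypothetical LARGE_CASH threshold above sweeps in plenty of legitimate activity.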

On top of this, banks are required to show evidence that they have tested their transaction monitoring models with varying inputs and at various thresholds. This helps validate that the model is detecting the suspicious behavior that it is supposed to, according to Brian Baral, head of global risk management at consulting firm Genpact.

"The challenge is that some AI systems and models will produce different results as the thresholds/variables are constantly changing as the model learns," Baral said. "Validating the model and creating model validation documentation theoretically would need to be repeated every time something in the model changes, which presents a challenge."

This has created a debate between banks and regulators on what constitutes satisfactory documentation and reproducibility as models get continuously fine-tuned, Baral said. Should validation happen one time on the base system, or on an ongoing basis every time the underlying model changes?
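
What a reproducible validation run might look like is easier to see with a sketch. The following uses scikit-learn on synthetic stand-in data and a hypothetical model version label; a real exercise would use the bank's own historical alerts and a far richer set of metrics, but the idea is to emit a dated artifact that can be regenerated whenever the model changes:

```python
import json
from datetime import datetime, timezone

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for historical transactions labeled suspicious / not suspicious.
rng = np.random.default_rng(0)
X = rng.normal(size=(5_000, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=5_000) > 1.5).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

# Sweep alerting thresholds and record the results as a dated validation artifact,
# so the exercise can be repeated whenever the model is retrained.
report = {
    "model_version": "demo-model-v1",  # hypothetical identifier
    "run_at": datetime.now(timezone.utc).isoformat(),
    "thresholds": [],
}
for threshold in (0.3, 0.5, 0.7, 0.9):
    flagged = (scores >= threshold).astype(int)
    report["thresholds"].append({
        "threshold": threshold,
        "alert_rate": float(flagged.mean()),
        "precision": float(precision_score(y_test, flagged, zero_division=0)),
        "recall": float(recall_score(y_test, flagged)),
    })

print(json.dumps(report, indent=2))
```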

"Across the compliance industry, regulators and banks are struggling with how to evaluate AI," Baral said. "There are differences not only with regulatory agencies (e.g., OCC, FINRA) but also with individual regulators that may be assigned to a specific institution."

Before they attempt to deploy AI, banks need to make their models interpretable to regulators by bringing them along on the model training and retraining journey rather than just showing them the end product, Baral said.

At least, they should do so for compliance tasks such as anti-money-laundering. Regulators regard fraud detection as loss reduction rather than a compliance activity, so they do not typically require banks to explain those models or their outputs.

Multiple vendors offer transaction monitoring systems, each with varying types and levels of AI involvement, including NICE Actimize, Verafin, ComplyAdvantage, Quantexa, Thetaray, Manta (now an IBM company) and Feedzai. Google also began offering anti-money-laundering AI this summer.

Another company offering AI-based transaction monitoring is Featurespace, whose anti-fraud and anti-money-laundering products attribute each decision to underlying risk concepts (transaction amount, frequency, provenance and many others) by assigning each concept a weight in the decision process. The company has published multiple research reports on explainability, and its products also provide interpretability, according to David Sutton, chief innovation officer for Featurespace.

Sutton said that an interpretable model has a paper trail to explain what went into it, how it was trained, what algorithms were used and other important factors to the building and retraining process — all key information that regulators want to get from banks about their models.

"Explainability on the other hand is about ensuring that for every prediction, we are explaining the risk factors behind this specific prediction," Sutton said. "Systems achieve this using explainability algorithms that attribute predictions to underlying modeling concepts (aka features)."

Current AI transaction monitoring systems have varying levels of explainability, from highly explainable decision tree systems (which work much like an automated flowchart) to less explainable neural networks (which tend to get the "black box" label).
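
The "automated flowchart" quality of a decision tree can be made concrete: for any single prediction, the sequence of splits the input followed can be read back as a plain-language explanation. A toy sketch with scikit-learn and hypothetical features:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

feature_names = ["amount", "tx_per_day", "cross_border"]

rng = np.random.default_rng(2)
X = rng.normal(size=(1_000, 3))
y = ((X[:, 0] > 1) & (X[:, 2] > 0)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def explain_path(sample: np.ndarray) -> list[str]:
    """Replay the sequence of splits the tree used for one transaction."""
    node_ids = tree.decision_path(sample.reshape(1, -1)).indices
    steps = []
    for node in node_ids:
        feat = tree.tree_.feature[node]
        if feat < 0:          # leaf node: no further split to report
            continue
        threshold = tree.tree_.threshold[node]
        value = sample[feat]
        op = "<=" if value <= threshold else ">"
        steps.append(f"{feature_names[feat]} = {value:.2f} {op} {threshold:.2f}")
    return steps

print(explain_path(np.array([1.8, 0.2, 0.9])))
```

Neural networks offer no comparably direct readout, which is why post-hoc techniques are needed to approximate their reasoning.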

For models that make less explainable decisions, there are techniques that can help make some sense of what is going on inside the model, but this ability is still largely under development, according to Matt Hansen, chief product officer at cloud banking platform nCino.

"We're still in the early days of how banks are actively using AI solutions, but right now these use cases often focus on using these tools adding to the efficiency of decision making by human bankers, not making decisions itself," Hansen said.
