Can AI help when a scam is invisible to the bank?

When customers are targeted directly in scams that are invisible to a bank, artificial intelligence, and in some cases generative AI, can play a role in fighting the fraud, experts say.

Last week, New York's Attorney General sued Citi for allegedly failing to detect scams that bilked customers out of tens of thousands of dollars and for not reimbursing those customers. Cited cases involved scammers sending text messages to customers that directed them to websites where they gave up online banking credentials. That let the fraudsters commit account takeover and send large wire transfers from the customers' accounts to accounts at other banks.

To what extent could AI be useful in cases like these, when customers are targeted directly in scams that are invisible to the bank?

Banks may not be able to detect the fake messages fraudsters send their customers — they don't have that visibility. But they can and do use machine learning techniques in such cases in two ways: One is to identify physical behaviors, such as typing or tapping patterns, that deviate from the customer's usual activity. The other is to spot anomalous transactions, including out-of-character wire transfers.

The first approach can help a bank whose customers are being scammed directly by fake text messages, said Dominic Venturo, chief digital officer at U.S. Bank, which is based in Minneapolis. 

"There are solutions today that look at other elements of the data stream so that if somebody turns up using somebody's user ID and password, there are other ways to know that that's not them, that that's potentially a bad actor trying to enter what is an otherwise legitimate user ID and password." 

Many authentication providers, including BioCatch and Socure, can collect a customer's device ID, IP address and behavioral biometric clues, such as typing speed and the angle at which the customer typically holds their phone, to raise a red flag when behavior does not line up with the customer's normal activity.
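As a rough illustration of the idea (not BioCatch's or Socure's actual method), a hypothetical scoring routine might compare a login session's device ID, coarse IP location and typing cadence against the customer's stored baseline, and count how many signals deviate:

```python
from dataclasses import dataclass

@dataclass
class SessionProfile:
    device_id: str
    ip_prefix: str            # leading octets of the IP, a coarse location signal
    typing_ms_per_key: float  # average interval between keystrokes

def risk_score(baseline: SessionProfile, current: SessionProfile) -> int:
    """Count how many behavioral signals deviate from the customer's baseline."""
    score = 0
    if current.device_id != baseline.device_id:
        score += 1
    if current.ip_prefix != baseline.ip_prefix:
        score += 1
    # a typing cadence more than 50% off the usual pace is suspicious
    if abs(current.typing_ms_per_key - baseline.typing_ms_per_key) > 0.5 * baseline.typing_ms_per_key:
        score += 1
    return score

baseline = SessionProfile("dev-123", "203.0.113", 180.0)
# Right user ID and password, but wrong device, location and typing rhythm:
attacker = SessionProfile("dev-999", "198.51.100", 60.0)
print(risk_score(baseline, attacker))  # 3 — enough to trigger step-up authentication
```

A real system would weight many more signals and learn the thresholds from data, but the principle is the same: the credentials alone are not the whole identity.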

The second approach, using AI to monitor wire transfers, also can catch behavior that a human might not be able to spot.

"If somebody never does a wire transfer and all of a sudden they're doing a transfer and it's a large amount, that's something that you might want to flag for review," said Jim Mortensen, strategic advisor at Datos Insights. 

Synchrony Bank in Stamford, Connecticut, has deployed custom and third-party machine learning fraud models to combat sophisticated application fraud, account takeover and transactional fraud schemes.

"Our machine learning models combined with our authentication tools have enabled us to reduce fraud rates over the last few years," according to Kenneth Williams, senior vice president and enterprise fraud leader.

Evolution of AI-based fraud detection

AI is helping banks get better at detecting fraud, according to Venturo.

"In the fraud world, the attack vectors change all the time," he said. "This is across billions of transactions, which generate lots of data. It turns out when you have lots of variables and lots of data, AI and machine learning are useful tools to identify patterns, to test patterns and then to develop solutions. So it's an area that is well suited for those kinds of technologies."

In earlier days, using machine learning for fraud detection was about "looking at a transaction relative to a customer's behavior and saying, does that look risky or not, should I approve it or not?" Venturo said. "As it learns your behavior better, it gets better at that."


The next generation looks at universes of customers that behave similarly and then tries to find the ones that are out of pattern, because an out-of-pattern customer might be an anomaly or a fraudster.

More recent efforts involve "looking at networks of networks of interrelated or possibly interrelated parties," Venturo said. For instance, in trying to identify all the participants in a new money laundering scheme, "any one of those transactions might look like one thing, but through AI and machine learning, you can actually network them together as being possibly related and say, oh, I've got a crime ring here."
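The "networking together" that Venturo describes can be pictured as building a graph from shared attributes and finding its connected components. The records and linking keys below are invented for illustration; real systems use far richer signals:

```python
from collections import defaultdict, deque

# Hypothetical transactions: each looks ordinary on its own, but shared
# devices and beneficiaries link the accounts into one candidate ring.
transactions = [
    {"account": "A", "device": "d1", "beneficiary": "X"},
    {"account": "B", "device": "d1", "beneficiary": "Y"},  # shares a device with A
    {"account": "C", "device": "d2", "beneficiary": "Y"},  # shares a beneficiary with B
    {"account": "D", "device": "d3", "beneficiary": "Z"},  # unrelated
]

# Group accounts that share any attribute value.
by_attr = defaultdict(set)
for t in transactions:
    for key in ("device", "beneficiary"):
        by_attr[(key, t[key])].add(t["account"])

# Turn the shared attributes into an account-to-account graph.
graph = defaultdict(set)
for accounts in by_attr.values():
    for a in accounts:
        graph[a] |= accounts - {a}

def component(start: str) -> set[str]:
    """Breadth-first search: every account reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph[queue.popleft()] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return seen

print(sorted(component("A")))  # ['A', 'B', 'C'] — a candidate crime ring
print(sorted(component("D")))  # ['D'] — stands alone
```

No single edge here proves fraud; the value is that accounts A, B and C, each unremarkable in isolation, surface together for an investigator to review.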

At Synchrony, AI also helps investigate customers' complaints of fraud.

"Synchrony has used AI-based models to help fraud investigators reach conditional approvals quicker on transactional fraud cases," Williams said. "This speeds the process of case resolution and provides additional time for more complicated cases."

Use of generative AI

Mortensen is just starting to see banks use generative AI in fraud detection. 

"Banks are dipping their toes in the water more and more with respect to AI tools that help analysts make better decisions," he said. "When you get something flagged as a fraud analyst, there's a lot of things you need to check and you need to explain. Generative AI is a great tool to explain those things."

Generative AI could function as a copilot for fraud analysts to do better work and be more effective on cases, Mortensen said.

Banks are also starting to use generative AI to create rules for machine learning models and to evaluate the performance of their fraud techniques and then make recommendations to modify them. 

But even though many banks do have good tools in place to identify out-of-pattern behaviors that may indicate fraud, "those tools aren't foolproof," Mortensen pointed out. "There are always false positives. Banks try to manage that to avoid additional friction. The key there is [using] better analytic tools to separate the goods from the bads, and be able to focus on the malicious transactions."
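The trade-off Mortensen describes can be made concrete with a toy example (the scores and labels are invented): moving a model's alert threshold trades false positives against missed fraud.

```python
# Hypothetical scored transactions: (model risk score, actually fraudulent?)
scored = [(0.10, False), (0.35, False), (0.55, False),
          (0.60, True), (0.80, True), (0.95, True)]

def confusion(threshold: float) -> tuple[int, int]:
    """Return (false positives, missed frauds) at a given alert threshold."""
    false_positives = sum(1 for s, fraud in scored if s >= threshold and not fraud)
    missed_fraud = sum(1 for s, fraud in scored if fraud and s < threshold)
    return false_positives, missed_fraud

print(confusion(0.5))  # (1, 0): one good customer inconvenienced, no fraud missed
print(confusion(0.7))  # (0, 1): no friction for good customers, one fraud slips through
```

Better analytics shift the scores themselves so that legitimate and malicious transactions separate more cleanly, easing the trade-off rather than just moving the threshold.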
