PayPal and Visa embrace AI for payment crime fighting

PayPal and Visa are using advanced forms of artificial intelligence to identify threats to payments before the point of transaction. 

"We work hard to be smarter and faster than the scammers, especially in a shifting landscape where bad actors frequently change their tactics to try to avoid detection," Chad Gonzales, vice president of consumer and credit fraud risk at PayPal, told American Banker. "It is not just about reacting to new scams, but instead it's about running ahead."

PayPal and Visa are bolstering AI-powered security as the financial services industry confronts AI as a security threat and increasingly uses the technology to manage security risks.

Sixty-five percent of national banks believe AI will have a major impact on fraud detection, ahead of other areas such as risk management and compliance at 57%, according to American Banker research. Fifty-one percent of regional and midsize banks say security is where AI will have the largest impact, compared with 52% of community banks and 77% of credit unions. Fifty-nine percent of financial institutions are testing or implementing AI for compliance and risk management, according to American Banker. Payment firms have also embraced AI as a security tool: Mastercard, for example, recently began using generative AI to help locate stolen card data.

"We're definitely seeing a growing focus on how AI, particularly large language models and adjacent tools, are shaping both the defense and offense sides of payment security," Gilles Ubaghs, a strategic advisor for commercial banking and payments at Datos Insights, told American Banker. 

PayPal's and Visa's moves

PayPal has launched an AI-powered scam detection tool for payments made through PayPal's and Venmo's Friends and Family feature, which enables fee-free payments within a user's personal network. The alerts are designed to reach users before funds are sent, aiming to identify fraud or scams that originate on social media. Consumers get information about potential fraud at the point of payment, using AI models that analyze data and update when payment patterns change. The content is tailored to the user based on the likelihood of fraud for each payment.

"While we continue to automatically decline transactions we detect as highly risky, the alerts intercede and help give customers time to pause, think, and review helpful tips before pressing send on a potentially risky payment," Gonzales said. When designing the system, PayPal focused on making the alerts both effective and user-friendly. The payment company matches each alert to the risk level of the payment a customer is trying to make, contrary to traditional warnings which send the same static warning for every transaction, according to PayPal executives.

PayPal uses an internally developed equivalent of generative AI to power its fraud prevention system. It also uses its homegrown gen AI within Fastlane, its checkout product, to automatically suggest a checkout option based on a consumer's prior payment history and the most efficient and inexpensive option. Fastlane is a large part of PayPal's recent strategy to boost branded checkout, using AI to speed processing and to act as a payment facilitator.


In a separate release, Visa this week launched the Visa Cybersecurity Advisory Practice, which provides insights that identify and combat potential threats. 

The cybersecurity unit combines Visa Consulting and Analytics with data scientists and product experts who provide training programs on payments best practices, cybersecurity assessments and attack detection. The unit also uses newer forms of AI to produce content and run security checks.

"Cybersecurity is no longer seen as a cost center, but as a vital part of any business' growth strategy," said Carl Rutstein, global head of advisory services for Visa, in a release.

How AI is helping create and catch fraud

From a fraud prevention perspective, AI tools are proving especially useful in areas like case management and early stage detection, according to Ubaghs. "We're seeing more institutions leverage machine learning models to triage alerts, flag suspicious activity faster, and assist analysts with summarizing cases for investigation," he said.

But human oversight remains crucial, especially given the risks of false positives. "No one wants to be the payments provider responsible for stopping a legitimate customer from checking out, and that risk only grows for high-volume digital and small businesses," Ubaghs said. 

Conversely, there's been a sharp uptick in AI-assisted phishing, man-in-the-middle attacks and increasingly sophisticated social engineering, according to Ubaghs.

"Tools that once gave banks and fintechs an edge are now being used against them," Ubaghs said. "The arms race is real." As the public becomes more aware of AI-driven fraud, that will place more pressure on payment companies to not only make improvements, but to be "visible" about it, Ubaghs said. 

"Quiet security isn't enough anymore," he said. "The financial and operational impact on small businesses can be severe. As a result small- to medium-size businesses want to use more security up front, even if some of it is just theater."

Even as payment companies and banks are investing in AI to counter fraud and scams, use of AI for payment security is still at an early stage. 

While there has been an increase in unsupervised machine learning to detect anomalies in digital payments, the adoption of large language models in the major fraud platforms is still mostly on the road map, John Meyer, a managing director at Cornerstone Advisors, told American Banker. Some of the bigger transaction monitoring vendors are using generative AI and large language models to help write alerts and case summaries, he noted, with a very early use being the authoring of suspicious activity reports.
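As a loose illustration of the unsupervised anomaly detection Meyer references, the Python sketch below fits scikit-learn's IsolationForest on synthetic "normal" payment features and flags an outlier. The features, parameters and data are assumptions for demonstration, not any vendor's production model.

# Minimal sketch of unsupervised anomaly detection on payment features,
# assuming scikit-learn; features and thresholds are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" payments: [amount, hour_of_day, payments_in_last_24h]
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.6, size=5000),   # typical amounts
    rng.integers(8, 22, size=5000),                  # daytime activity
    rng.poisson(2, size=5000),                       # low velocity
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A large 3 a.m. payment with unusually high velocity
suspect = np.array([[2500.0, 3, 15]])
print(model.predict(suspect))        # -1 flags an anomaly, 1 means inlier
print(model.score_samples(suspect))  # lower scores are more anomalous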

"This summarization is significant as some professionals estimate that it takes a senior anti-money-laundering or fraud team member an hour or so to compile a sufficient suspicious activity report narrative," Meyer said. "LLMs can take that tedious task and reduce the effort to 15 minutes or so."
