As agentic commerce grows, risks abound

  • Key insights: Agentic commerce, in which AI agents complete tasks with little or no human supervision, can make transactions more efficient but also heightens security risks. 
  • What's at stake: Nearly all organizations report a rise in AI-facilitated attacks over the past year, according to Darwinium. 
  • Forward look: Fintechs, which are often digital-first, are generally more prepared than banks to stop fraud across the customer journey, according to research. 

With agentic commerce proliferating, banks face new risk management challenges, and the stakes are high for those that fall behind. 
The changing fraud environment—and the dangers to banks—is underscored by a recent report published by Darwinium, an AI fraud prevention company. Darwinium polled 500 fraud, risk, and security leaders across the U.S. and U.K., 40% of them from banks and fintechs.

The upshot: 97% of organizations surveyed have seen AI-facilitated attacks increase in the past year, and 75% estimate that more than 25% of their current fraud attempts are AI-assisted. On average, 40% of attacks are now AI-assisted, and businesses report an average annual loss of $4.5 million from AI-enabled bad actors, according to Darwinium. 

Addressing AI-exacerbated fraud is critical given the growth of agentic commerce. Thirty percent of shoppers said they would be willing to let an AI agent complete a purchase on their behalf, according to a survey of 1,300 consumers late last year by Contentsquare, a digital analytics company. What's more, Morgan Stanley research predicts that agentic shoppers could represent $190 billion to $385 billion in U.S. e-commerce spending by 2030, capturing 10% to 20% of market share. Another estimate by Bain & Co. predicts the U.S. agentic commerce market could reach $300 billion to $500 billion by 2030, accounting for about 15% to 25% of overall e-commerce. 

With these trends in mind, here's what banks need to know to protect themselves and their customers:

Determine who's knocking at the door

Many banks aren't doing a good job of distinguishing between good bots and bad ones, Michael Rodriguez, chief growth officer at Darwinium, told American Banker. The market is split on how to handle legitimate agentic traffic: 48% of respondents allow it by default, relying on after-the-fact monitoring, while 31% proactively block it unless explicitly allowed, the study found. 

Many organizations block legitimate bots, costing an average of $3 million in lost revenue per organization each year. Sixty-two percent of organizations estimate this cost at $1 million or more. 

Banks were worse than other industries at distinguishing good agentic traffic from bad: 52% reported being unable to tell the two apart, versus an average of about 40% across industries, according to Darwinium. Banks that can't distinguish legitimate agent traffic from malicious agent traffic face a poor choice: block access outright, or let it through and potentially absorb higher fraud rates. 

Banks also need to make decisions on how bots can interact on their systems, Rodriguez told American Banker. Should a bank allow agents to search for rates on its website, for example? What about filling out an application online on a customer's behalf? At many banks, these policies are still being debated.

Beware the prevalence of deepfakes

Deepfakes have become commonplace across industries, but even more so in banking and fintech. Fifty-seven percent of banks and 56% of fintechs said they had experienced deepfakes "multiple times" in the last 12 months, compared with 33% of e-commerce companies, according to Darwinium. For banks, AI-written impersonation messages are most prevalent; for fintechs, voice cloning is most widespread.

"Banks aren't just a higher-value target; their processes are likely more vulnerable to deepfakes," Rodriguez said. While e-commerce providers mostly need to verify a card, banks need to verify a person or business—a much harder problem with significantly more regulatory pressure, he noted. 

The need to boost anti-fraud efforts

The Darwinium report highlights where banks can improve their fraud-detection defenses. More than half of organizations, 51%, said they can stop fraud at only a few checkpoints, while 12% can stop it at just one.

Historically, organizations have cobbled together different tools to detect fraud. They might, for example, use one tool for login, another for payments, and another for onboarding. Notably, 50% of respondents rely on five to six vendors for their fraud-fighting efforts, and 16% use seven or more. "Each tool creates a handoff, and every handoff is a gap that AI-powered attacks will exploit," the report said.

Fintechs, which are often digital-first, are generally more prepared than banks to stop fraud across the customer journey, whereas banks' efforts tend to be more siloed. This is a problem for banks since they are likely to be more exposed to fraud losses as agentic attack volumes increase. Also, "if fintechs can absorb agentic fraud more effectively while maintaining a frictionless experience, they will likely win more customers," Rodriguez said. 

Liability remains a wildcard

There's significant uncertainty around who bears liability when an agent-driven purchase or account action goes wrong. Determining an appropriate liability framework remains a work in progress.

Thirty-nine percent of respondents said the AI company should be liable, according to the Darwinium report. Meanwhile, 20% said the customer or user should bear responsibility; 15% said liability should be shared based on the scenario; 14% said the merchant or platform should be liable, and 11% said the bank or payment processor should be liable.

