Banks tighten ID checks as AI deepfakes get better 

"Your voice is your password." That was the tagline many banks once used to promote biometrics as a secure, seamless way to verify customers. Now, deepfakes have flipped that idea on its head.

What was once seen as a simple, foolproof authentication method is now under serious challenge. As synthetic voice and video attacks become more convincing, banks are rethinking how far they can rely on biometric authentication. That concern was raised by OpenAI CEO Sam Altman, who, speaking at a Federal Reserve conference last week, said AI has "fully defeated" voiceprint authentication methods. He warned of a coming fraud crisis in financial services due to AI impersonations.

Analysts and bankers agree: biometrics, including voice and facial recognition, are vulnerable as AI-generated voice and video clones become harder to distinguish from the real thing. The solution lies in multilayered authentication systems and better detection tools. But it's a constant race against fraudsters, whose toolkits keep getting better.

"We've known for a long time that voice ID is spoofable," said Jim Mortensen, strategic advisor in the fraud and AML practice at Datos Insights. "The fraudsters will match the capabilities that solution providers develop and get better as well. It's going to be a constant push and pull."

Reporters have been demonstrating voice ID's weaknesses for years. In 2023, journalists at Vice and The Wall Street Journal bypassed bank voice biometrics using AI-cloned voices. The losses are expected to mount: generative AI-enabled fraud losses in the U.S. are projected to grow more than threefold, from $12.3 billion in 2023 to $40 billion by 2027, according to Deloitte's Center for Financial Services. As the technology to create voice and video deepfakes gets cheaper and easier to use, the barrier to entry for fraudsters keeps dropping.
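
The arithmetic behind that projection is straightforward. A quick back-of-the-envelope check (the dollar figures come from Deloitte as cited above; the growth-rate math is illustrative):

```python
# Sanity check of the Deloitte projection cited above.
losses_2023 = 12.3  # U.S. gen-AI-enabled fraud losses, $ billions (2023)
losses_2027 = 40.0  # projected losses, $ billions (2027)

multiple = losses_2027 / losses_2023               # ~3.25x: "more than threefold"
cagr = (losses_2027 / losses_2023) ** (1 / 4) - 1  # implied compound annual growth

print(f"Growth multiple: {multiple:.2f}x")   # 3.25x
print(f"Implied annual growth: {cagr:.1%}")  # ~34.3% per year
```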

Managing the risk of biometric identification systems

Both voice- and video-based biometrics can be used for authentication, whether for signing into accounts, identifying customers who contact call centers or onboarding new account holders. Users may be required to match a selfie or complete a video liveness test to verify the ID documents they've submitted or have on file, Mortensen added.

Bankers acknowledge that AI-based voice and video cloning poses a threat, and they're working on solutions that involve stepped-up verification, such as combining multiple forms of ID verification (voice plus a PIN, or voice plus a security question) along with liveness and deepfake detection technologies, Mortensen said.

A multifactor approach that pairs these modalities with behavioral biometrics, that is, signals indicating how users type or swipe, is considered best practice.

"A layered strategy is so important. Combine voice or video with behavioral and other components — it ups the level of difficulty for the fraudster," he said.

Banks should also review their vendor choices, including vendors' ability to update their toolsets as the threat environment evolves.

"It's a combination of getting more innovative vendors that might have better capabilities … a lot of these existing vendors have threat analysis functions in their organizations that look at failures of the solution in the past, and then try to understand it so they can provide feedback and patch those failures," he said.

A growing concern among bankers

According to American Banker's 2025 Frictionless Fraud Survey, which polled 125 executives in banking, credit unions and payments in March and April 2025, a third reported experiencing deepfake attacks and adjusting their processes as a result. Another 47% said they are proactively preparing for such threats, while 20% had no formal plan in place to address this type of fraud.

Predictably, larger institutions are more likely to have seen or responded to a deepfake incident. Nearly half of national banks (47%) and more than a third of midsize banks have adjusted their processes in response to deepfake or AI-driven social engineering, while 80% say they are proactively preparing for or responding to such tactics.

Research from Datos Insights supports the finding that bankers are taking deepfake threats more seriously. A survey of 73 financial institutions on application fraud trends conducted in the first quarter of 2025 found that 77% of respondents said deepfakes were either a moderate or high threat to application processes. Meanwhile, 55% said AI or machine learning will play a significant role in application fraud-prevention strategies in the next two years.

For their part, banks say they are taking the threat seriously and evolving their defenses accordingly. Stearns Bank said it uses biometric identification at login, layered with multifactor authentication and real-time risk signals like device changes.

"Despite the increased challenges AI-generated spoofing has created, the guiding principles of verification and authentication have not changed," said Adam Gill, director of digital banking and product at Stearns Bank. 

He emphasized the importance of continuing to layer authentication factors, including "something you know," "something you have" and "something you are," rather than abandoning biometrics altogether. 
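
The know/have/are layering Gill describes can be reduced to a simple rule: accept no single factor category on its own, and raise the bar when risk signals appear. The sketch below is deliberately simplified; the function and signal names are hypothetical, not Stearns Bank's.

```python
def authenticate(biometric_ok: bool, passed_otp: bool, knows_secret: bool,
                 new_device: bool) -> bool:
    """Require factors from at least two of the three categories:
    'something you are' (biometric), 'something you have' (one-time
    passcode to a registered device), 'something you know' (PIN or
    security question). Risk signals raise the bar rather than replace a factor."""
    factors = sum([biometric_ok,   # something you are
                   passed_otp,     # something you have
                   knows_secret])  # something you know
    required = 3 if new_device else 2  # step up on an unfamiliar device
    return factors >= required

# A deepfaked biometric alone fails: it covers only one factor category.
assert authenticate(biometric_ok=True, passed_otp=False,
                    knows_secret=False, new_device=False) is False
```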

"Integrating biometric authentication solutions is not a one-off or siloed digital transformation exercise," Gill said. "It is the continued testing of new solutions, layering existing solutions, refining rulesets and continuously learning from use cases that help banks work smarter."

Meanwhile, at Bankwell, Executive Vice President and Chief Risk Officer Steven Brunner said the bank is "actively evaluating biometric ID solutions that align with our existing infrastructure." 

Beyond authentication, consumers are generally willing to provide biometric information to initiate payments, but an uptick in deepfake incidents could erode that trust, suggests Christopher Miller, lead analyst for emerging payments at Javelin Strategy & Research.

"Consumers, broadly speaking, are not irrevocably opposed to signing up for biometric authentication … this positive attitude is one that could conceivably be turned negative," amid a series of news stories about deepfake authentication risks, he said. 

In response to emerging deepfake threats, banks should adopt multimodal authentication and real-time verification, while also exploring information-sharing across institutions, said Tiffani Montez, principal analyst at eMarketer.

"Banks must move beyond one-off fingerprint or facial scans to embrace continuous, multimodal authentication, monitoring behavioral signals like typing cadence and voice and stepping up verification in real time," she said. "By sharing fraud intelligence across institutions and embedding privacy-by-design, they can block synthetic IDs and deepfake attacks while earning lasting customer trust."
