BankThink

The banking industry isn't ready to fight AI-enabled deepfakes

The banking industry and its regulators need to acknowledge the danger presented by ultrarealistic deepfake technology and implement new layers of transaction authentication, writes Shivani Deodhar, of BNP Paribas, in American Banker's BankThink.

In early 2024, a Hong Kong-based multinational was defrauded of $25 million after an employee was tricked into joining a video call with what appeared to be the company's CFO and other colleagues. The twist? Every participant on the call except the victim was an AI-generated deepfake. This wasn't a Hollywood heist movie. It was real.

The financial industry is now entering a new era of fraud, where artificial intelligence is no longer just a tool for efficiency. It's also a weapon. As generative AI tools become more sophisticated and accessible, so too do the fraud tactics used against banks and their customers. Deepfakes, hyperrealistic synthetic media that mimic voices, faces and entire identities, are the latest threat vector in the fraud ecosystem. And if banks aren't aggressively preparing now, they risk being caught flat-footed.

Just two years ago, deepfakes were mostly associated with viral memes or celebrity impersonations. Today, they're being used to forge customer identities, fake compliance videos and impersonate executives for wire fraud. In a sector that relies on trust and verification, this presents a uniquely dangerous scenario. A fraudster no longer needs to breach a system; they just need to convince a bank employee that a fraudulent request is coming from someone familiar.

Unlike phishing emails or social engineering calls, deepfakes appeal to the very senses we trust most: sight and sound. Human brains are wired to believe what they see and hear. When that can no longer be trusted, traditional fraud detection mechanisms and human intuition may not be enough.

Banks have made commendable strides in securing digital infrastructure. Multifactor authentication, biometrics and behavioral analytics have become table stakes. But deepfake fraud attacks target the weakest link: people. When a bank employee sees a trusted executive's face on-screen, hears their familiar voice and receives a plausible request, the tendency is to comply, not question.

Moreover, many internal processes still rely heavily on manual validation, especially in relationship-managed segments like corporate banking and wealth management. These are the very environments where deepfake fraud is most likely to succeed.

Despite the growing risk, regulatory frameworks addressing AI-generated synthetic media remain fragmented and reactive. While the SEC and Fincen have issued general guidance on AI risks and cybersecurity, little of it addresses deepfakes directly. That leaves banks largely on their own to build defenses.

We've seen a similar lag before. In the early 2010s, the financial sector underestimated the rise of social engineering and business email compromise scams. It wasn't until billions had been lost that an industrywide response kicked in. We can't afford to repeat that mistake.

The banking industry needs to shift from a reactive to a proactive posture, not just in technology but in governance, training and collaboration. High-risk transactions or approvals should require more than just audiovisual confirmation. Banks should adopt multichannel verification, such as a second confirmation through a separate medium, or blockchain-secured transaction workflows that can't be spoofed by audiovisual inputs.
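
To make that idea concrete, here is a minimal sketch, in Python, of what a multichannel verification gate could look like. It is illustrative only: the dollar threshold, channel labels and function names are hypothetical, and a real control would sit inside a bank's existing payment, identity and case-management systems.

```python
from dataclasses import dataclass

# Illustrative threshold above which audiovisual approval alone is never enough.
HIGH_RISK_THRESHOLD_USD = 100_000


@dataclass
class TransferRequest:
    requester: str       # identity claimed on the call or message
    beneficiary: str
    amount_usd: float
    origin_channel: str  # e.g., "video_call", "phone", "email"


def requires_out_of_band_confirmation(req: TransferRequest) -> bool:
    """High-value requests, and any request arriving only over an
    audiovisual channel, must be confirmed through a separate medium."""
    audiovisual = req.origin_channel in {"video_call", "phone"}
    return req.amount_usd >= HIGH_RISK_THRESHOLD_USD or audiovisual


def approve_transfer(req: TransferRequest, confirmed_out_of_band: bool) -> bool:
    """Release the transfer only if the multichannel policy is satisfied.

    `confirmed_out_of_band` would come from a second, independent channel,
    such as a callback to a number on file, a signed message in the bank's
    own portal or a hardware-token challenge, never the same video call.
    """
    if requires_out_of_band_confirmation(req) and not confirmed_out_of_band:
        return False  # hold the payment and route it to fraud review
    return True


if __name__ == "__main__":
    req = TransferRequest("CFO (claimed)", "Vendor X", 25_000_000, "video_call")
    print(approve_transfer(req, confirmed_out_of_band=False))  # False: held for review
```

The point of the sketch is the policy, not the code: an instruction delivered over sight and sound alone never clears a high-risk payment by itself.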

Just as anti-fraud teams use machine learning to detect anomalies in financial behavior, they must now also use AI to spot signs of synthetic media. Several AI-driven tools can detect pixel irregularities, voice cloning artifacts or unnatural facial expressions.

Staff should be trained to question video calls and audio instructions just as they were taught to scrutinize suspicious emails a decade ago. Verification should trump convenience, particularly for unusual requests or high-value transfers.

Fraudsters' tactics evolve rapidly. Banks should work with regulators, telecom providers and cybersecurity firms to share threat patterns and detection models. The faster one institution spots a deepfake, the better prepared the rest of the industry will be.
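
As a purely illustrative sketch of how such a detector might feed a verification workflow, the Python function below takes the detector's confidence score as an input rather than naming any particular detection product; the score, thresholds and risk factors are assumptions made for the sake of the example.

```python
def escalate_for_manual_verification(
    synthetic_media_score: float,  # 0.0-1.0 output of an assumed deepfake detector
    amount_usd: float,
    unusual_request: bool,         # e.g., new beneficiary, atypical urgency
) -> bool:
    """Decide whether a request needs human, out-of-band verification.

    Detection models are imperfect, so their score lowers the bar for
    escalation rather than acting as a sole pass/fail check. All thresholds
    here are illustrative placeholders, not calibrated values.
    """
    if synthetic_media_score >= 0.8:
        return True   # strong signal of manipulation
    if amount_usd >= 100_000 and synthetic_media_score >= 0.4:
        return True   # moderate signal on a high-value transfer
    if unusual_request and synthetic_media_score >= 0.4:
        return True   # moderate signal on an out-of-pattern request
    return False


# Example: a mid-confidence detection on a large, unusual wire still escalates.
print(escalate_for_manual_verification(0.45, 250_000, unusual_request=True))  # True
```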

Ultimately, defending against deepfake fraud is not only about upgrading tools; it's about shifting mindsets. The financial industry has long rewarded efficiency, speed and client responsiveness. But in the age of generative AI, skepticism must become a core competency. A healthy dose of doubt may be the best defense against the most convincing lies ever manufactured.

If a fraudster can impersonate a CEO convincingly enough to authorize a wire transfer, then every transaction and every trust-based process is now in question. Visual confirmation alone is no longer sufficient. Banks must adapt to a world where seeing is no longer believing.

The next billion-dollar bank fraud may not be committed with malware. It might come disguised as the boss on Zoom.
