BankThink

AI-enabled bank fraud is a problem right now, and we are not ready

The financial services industry is relying on outdated methods of detecting and fighting fraud. With the assistance of artificial intelligence, criminals are penetrating vulnerable systems. It's time for collective action, writes Shlomit Wagman, of the Harvard Kennedy School.

AI fraud has arrived, and our defenses are not yet ready. What makes this moment different is not just the sophistication of attacks but their scale. Generative AI can churn out thousands of convincing scams in seconds — emails, voices, even video calls — flooding systems designed for a slower, simpler era of crime.

Anthropic's recent Threat Intelligence Report confirms it: AI-generated phishing, malware and social-engineering attacks are already being deployed. Sam Altman, CEO of OpenAI, warned that it is "crazy" that banks still use voice verification, given AI's ability to mimic speech with uncanny precision.

He is right. For two decades I have worked at the intersection of finance, crime and technology: as a senior executive at a global task force on money laundering, as the head of a national financial crime agency, as a fintech executive and now as a senior fellow at Harvard. From that vantage point, I can say that the fraud crisis has arrived, and we are unprepared.

The assumptions that once underpinned financial security have collapsed. "My voice is my password," an approach once promoted by regulators, is obsolete. Deepfakes, forged documents and hyper-personalized phishing campaigns are proliferating.

One venture capital executive told me how, within hours of her firm announcing an investment, employees received a convincing fake message, in a voice indistinguishable from her own, ordering them to reroute funds. In other cases, staff have been deceived by deepfake video calls from supposed executives. Families have been manipulated by AI-generated pleas from relatives abroad.

What once looked like a crude "Nigerian prince" scam is now a personalized assault. Tools like "FraudGPT" are openly sold on the dark web, enabling anyone to spoof websites, mimic identities and penetrate accounts in real time. Even "liveness tests" meant to confirm that a user is real are increasingly easy to fool.

This is not just a financial problem. AI fraud is also a weapon. The same tools can impersonate political leaders, disrupt elections or trigger panic in a terror attack. Automated strikes on grids, hospitals or banks turn crime into a national security threat. When trust in digital identity collapses, the damage spreads from markets to democracy itself.

The systems we rely on are relics. Passwords, voice recognition and rule-based monitoring were designed for an era when attackers had limited reach. They are collapsing against adaptive AI systems that learn, mimic and scale at will. The gap between offense and defense is widening, with trillions of dollars — and public trust — at stake.

We are not powerless. Just as society contained the computer virus epidemics of the 1990s and spam in the 2000s, we can contain AI-driven fraud if we act quickly and ambitiously. Five priorities stand out.


First, we must modernize our defenses. Vulnerable companies, banks chief among them, must replace rule-based systems with AI-native monitoring that can process massive data flows and detect evolving patterns. They must also protect customers ensnared by convincing scams, with the authority to stop a transaction even when it appears to be authorized.

Next, we need to fight AI with AI. Criminals already use AI at a speed no human can match. Comparable investment must go into "Defense AI": tools that detect deepfakes, synthetic identities and fraud patterns in real time. Today, such research is fragmented and underfunded. Governments, AI labs, fintech firms and academia must unite.

The next step is to rebuild digital identity itself. Voice, photos and video can no longer serve as reliable proof of identity. New models must combine biometric, behavioral and contextual factors that machines cannot easily fake. This is both a regulatory necessity and an entrepreneurial opportunity.

Governments need to coordinate their responses. Many states are enacting deepfake laws, but the challenge is global, not local. Without coordination, the ecosystem remains vulnerable. The AI industry must adopt interoperable standards for labeling synthetic content, rather than today's patchwork of company-specific detectors. A model already exists: the Financial Action Task Force, which I helped lead, created binding international standards to safeguard the financial system. We need a similar framework for AI misuse, one that balances innovation with the protection of markets, democracy and national security.

Real-time information sharing is also critical. Banks, regulators and technology companies must be able to exchange indicators of emerging fraud, using privacy-preserving technologies to protect consumers.

Finally, public education must be part of the response. Citizens need to understand that not everything they see or hear is authentic. Schools and media campaigns should instill habits of skepticism, verification and vigilance.

This is a technological arms race. On one side: criminals, automated systems and state actors. On the other: outdated defenses resting on trust we can no longer afford.

Adversaries no longer need armies. Small groups with generative AI can cause levels of damage once reserved for states. Unless the gap is closed quickly, the cost will be measured not only in trillions of dollars but in weakened democracies, loss of public trust and manipulated elections.

AI companies, entrepreneurs, the financial sector, academia and policymakers must act now. Delay is an open door we cannot afford to leave unguarded.
