$300 AI tool kits let criminals bypass bank security


  • Key insight: Fraudsters are actively using software such as "ProKYC" and virtual cameras to feed deepfake videos into live verification portals, bypassing biometric checks in minutes.
  • What's at stake: Banks risk failing to comply with federal know-your-customer regulations and allowing money launderers and fraudsters onto their platforms.
  • Expert quote: "We can't ignore the AI threat," said Eric Huber, TD Bank's head of adversarial intelligence. "It's not hype. It's real."

Overview bullets generated by AI with editorial review

For less than $300, a criminal can buy everything needed to defeat a bank's identity verification: a Social Security number, a background report, a printer that can print onto the same synthetic material used to make real passports, and an AI face generator.

That's enough to open an account at a bank with a real name and a fake face in under five minutes, defeating whatever liveness checks or document verification tools the bank relies on today.

TD Bank cybersecurity expert Eric Huber demonstrated some of these exploits live during a Monday session at the 2026 RSAC Conference, an annual gathering of security professionals.

Huber's central message was blunt: The tools to defeat bank identity verification are already cheap, widely available and actively in use.

Huber, TD's head of adversarial intelligence and disruption, detailed how a thriving criminal marketplace operates largely in the open on the messaging app Telegram.

The market offers counterfeit documents, stolen personal data, AI generation tools and instructional courses teaching buyers exactly how to bypass banks' know-your-customer checks.

These highly commoditized, sophisticated counterfeits leave banks needing to adapt their defenses to keep money launderers, fraudsters and other criminals off their platforms and to stay in compliance with federal know-your-customer regulations.

The commoditization of fake identities and PII

Fraudsters are manufacturing convincing physical documents using the same materials and methods as legitimate government agencies, according to Huber.

For example, bad actors use Teslin substrate, the synthetic material used to print authentic passports and IDs, to create forged documents.

Counterfeiters also boast on Telegram about using readily available hardware, such as the Epson H6000III printer, to mass-produce fake identification cards that perfectly replicate original security layers, according to screenshots of criminal forums Huber presented.

As of the time of publication, these printers were listed on Amazon for $136.49.

To fill out these counterfeit IDs, criminals easily purchase stolen personally identifiable information on Telegram. Bad actors sell Social Security numbers for as little as $20, while $100 buys a full background report.

The criminal underground has also evolved into an educational hub where bad actors sell online classes on how to bypass security measures.

Sellers offer comprehensive tutorials teaching buyers "how to make DL/ID passing KYC verifications," how to forge bank statements and utility bills, and how to create scannable barcodes from scratch, according to advertisements on Telegram Huber showed during the presentation.

This crime-as-a-service model allows even novice criminals to easily acquire the skills and templates necessary to commit sophisticated identity fraud against financial institutions.

Bypassing biometrics: The "ProKYC" threat

Huber demonstrated how a widely available software program, often called "ProKYC," directly targets the digital onboarding and "liveness" checks of financial institutions and cryptocurrency exchanges. The software executes a three-step bypass, according to Huber's presentation.

First, it generates a counterfeit document using stolen personal information and an AI-generated face.

Next, it creates a deepfake video that maps the AI-generated face onto the movements expected by the verification system.

Finally, the software uses a virtual camera to feed this manipulated video directly into the institution's live verification portal, tricking the system into verifying a fake identity.

During his presentation, Huber showed a video of the software defeating an exchange's liveness check in just five minutes.

Bad actors sell access to this tool on Telegram, and developers constantly update the software to evade new security measures, according to Huber.

The criminal underground even provides massive databases of user-submitted verification photos and videos to help fraudsters practice and perfect their bypass techniques.

The Financial Crimes Enforcement Network, or Fincen, recently issued a warning to financial institutions about this exact threat. Generative AI tools "have greatly reduced the resources required to produce high-quality synthetic content," according to a November alert from the agency.

The agency warned that criminals use third-party webcam plugins to display previously generated deepfake videos during live verification checks, completely circumventing the security measure.

While some in the banking and cybersecurity industries recently panicked over the advanced video capabilities of new AI generators like KlingAI, Huber said that criminals have already been using older, established technologies to successfully defeat biometric security.

An actionable road map for banks

Despite the alarming technological advancements showcased during the session, Huber told the audience of security professionals, "don't panic."

Instead, Huber laid out a specific road map for financial institutions to harden their defenses against AI-enabled identity fraud.

Within one week, banks should "designate an AI point person within your organization," he recommended. Huber noted that this should be someone who inherently understands the technology.

Within three months, institutions must map their attack surface to determine exactly where and how threat actors could deploy these deepfake tools against the company, its employees and its customers, according to Huber.

By the six-month mark, banks should update their risk frameworks (in many cases, this is the bank's implementation of the NIST Cybersecurity Framework) and execute the necessary operational and technological changes identified during their earlier assessments, according to Huber.

Huber's overarching message for the banking industry is that the threat of artificial intelligence compromising onboarding processes is already here. Because the criminal underground continues to adapt and innovate its evasion tactics, U.S. banks must evolve their defenses at the exact same pace, he said.

"We can't ignore the AI threat," according to Huber. "It's not hype. It's real.

"With the right people, processes, and tools, we can protect our organizations and our customers."

