BankThink

Regulators are holding back secure digital identity verification

Existing law already provides the tools that would allow an across-the-board upgrade in digital identity verification, with benefits to banks and consumers alike. Regulators are the roadblock, writes Will Wilkinson, of Persona.

American banks are drowning in data but still losing the fight against fraud. They spend billions on identity verification procedures that cost $15 to $30 per customer, multiplied across tens of millions of account openings, yet fraud losses continue to climb to record highs. Synthetic identities slip through. Account takeovers multiply. AI-generated deepfakes now outsmart document checks that haven't meaningfully changed since the Patriot Act.

Meanwhile, consumers are caught in the middle. Snap a photo of your plastic card. Retype your own name and date of birth. Answer "secret" questions whose answers have long since been stolen and resold. Then do it again at the next bank, and again at the one after that. The result isn't stronger security. It's slower onboarding, higher abandonment and sprawling warehouses of personal data that become liabilities the moment they're breached.

The paradox is that a system meant to verify identity increasingly undermines trust. More data is collected than ever, yet fraud continues to rise. The tools to fix this already exist — technologies that can strengthen verification, reduce data exposure and streamline compliance. What's missing is regulatory clarity — so banks can adopt stronger controls without fear of examiner whiplash, and examiners can evaluate modern defenses with a consistent playbook. Three practical regulatory clarifications, achievable under existing law, could move the industry toward a stronger and more privacy-preserving model of identity verification.

First, a mobile driver's license, or mDL, should count as valid, even preferred, under Customer Identification Program, or CIP, rules. A few institutions are piloting them, but most are waiting for clear regulatory guidance, which could be provided through straightforward supervisory clarification.

An mDL isn't a JPEG of a plastic card. It's an issuer-attested, cryptographically signed credential with real-time revocation and selective disclosure — all of which protect both privacy and security. It allows banks and fintechs to confirm key identity attributes — Does the name match? Is the credential valid? Is the customer over 21? — without collecting unnecessary personal details. That means reducing the amount of sensitive data retained, dramatically narrowing breach exposure.
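To make selective disclosure concrete, here is a minimal, purely illustrative Python sketch. It simulates issuer-attested, per-attribute signatures so a bank can verify only the attributes it needs (say, "over 21") while also checking revocation. All names here are hypothetical, and the HMAC with a shared key is a deliberate simplification: real mDLs under ISO/IEC 18013-5 use public-key signatures from the issuing authority.

```python
import hmac
import hashlib

# Stand-in for the DMV's signing key. Real mDLs use public-key crypto,
# so verifiers never hold the issuer's private key.
ISSUER_KEY = b"demo-issuer-key"

def sign_attr(name, value):
    # The issuer attests each attribute separately, which is what
    # makes selective disclosure possible.
    msg = f"{name}={value}".encode()
    return hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()

# Credential as issued by the state: attributes plus per-attribute signatures.
mdl = {
    "credential_id": "mdl-123",
    "attrs": {"name": "Jane Doe", "over_21": "true"},
}
mdl["sigs"] = {k: sign_attr(k, v) for k, v in mdl["attrs"].items()}

REVOKED = set()  # stand-in for the issuer's real-time revocation list

def verify_disclosure(credential, requested):
    """Verify only the attributes the bank actually needs."""
    if credential["credential_id"] in REVOKED:
        return None  # credential has been revoked by the issuer
    disclosed = {}
    for name in requested:
        value = credential["attrs"][name]
        expected = sign_attr(name, value)
        if not hmac.compare_digest(expected, credential["sigs"][name]):
            return None  # attribute was tampered with
        disclosed[name] = value
    return disclosed

# The bank asks only "is this customer over 21?" -- no address, no full DOB.
print(verify_disclosure(mdl, ["over_21"]))  # {'over_21': 'true'}
```

The point of the sketch is the shape of the flow, not the crypto: the bank ends up storing one attested fact rather than a full image of a driver's license.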

Twenty-two states have already launched or piloted mDLs, and consumers are beginning to present them. Yet many financial institutions hesitate to accept them because existing guidance hasn't caught up.

For regulators, the checklist is straightforward: The financial institution should document which products accept mDLs, set selective disclosure to "on" by default, implement revocation checks and keep fallback paths for customers without one. Those steps would make verification more secure and less intrusive — stronger than a plastic-card snapshot, easier to audit and far less risky to store. It's essentially a streamlined "trusted traveler" model for banking: more assurance, less friction and far less exposed data. Ultimately, regulators can accelerate progress by publicly affirming the availability, benefits and regulatory acceptance of mDLs.

Next, financial regulators should encourage use of reusable credentials within the financial system. The principle isn't new. Examiners already review third-party know-your-customer reliance. A reusable credential just makes the arrangement explicit, with better controls and audit trails.


In this model, a customer verifies their identity once with a trusted provider and then reuses that credential elsewhere. Alongside basic attributes, these credentials can carry live signals: device reputation, liveness checks, behavioral biometrics, revocation status. These signals go beyond what a driver's license or passport contains, and they expose synthetic identities and coordinated fraud earlier than institution-by-institution document uploads can.
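A rough sketch of what a relying bank's acceptance policy might look like, under hypothetical names and thresholds of my own invention: the credential's static attributes aren't enough on their own; the live signals must also check out.

```python
from dataclasses import dataclass
import time

@dataclass
class ReusableCredential:
    subject: str
    verified_at: float        # when the trusted provider verified identity
    device_reputation: float  # 0.0 (bad) .. 1.0 (good), supplied live
    liveness_passed: bool     # result of the provider's latest liveness check
    revoked: bool = False     # live revocation status from the provider

def accept(cred, max_age_days=365, min_device_rep=0.7):
    """Illustrative acceptance policy combining attributes and live signals."""
    if cred.revoked or not cred.liveness_passed:
        return False
    if cred.device_reputation < min_device_rep:
        return False
    # Re-verify identities that were last checked too long ago.
    age_days = (time.time() - cred.verified_at) / 86400
    return age_days <= max_age_days

cred = ReusableCredential("Jane Doe", verified_at=time.time(),
                          device_reputation=0.9, liveness_passed=True)
print(accept(cred))  # True
```

In practice the thresholds, signal sources and audit trail would all be governed by the third-party oversight expectations the article describes next.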

The economics are compelling. Automated reusable flows cost a fraction of manual reviews. Abandonment rates drop when customers don't have to keep proving who they are from scratch. If even a fifth of U.S. account openings used reusable credentials, industry savings could reach hundreds of millions of dollars while improving fraud prevention.

For regulators, the oversight mirrors existing third-party frameworks. Regulators can advance adoption with clear expectations for banks, including that they confirm that providers meet recognized standards such as the National Institute of Standards and Technology, or NIST, 800-63 or equivalent; verify revocation workflows; test controls regularly; ensure audit rights and incident playbooks; and plan for failovers. Regulators can promote use of reusable credentials through guidance that covers these practical safeguards.

Finally, regulators need to encourage financial institutions and their vendors to adopt AI technologies to combat bad actors using AI technologies. Fraudsters are already weaponizing AI to fabricate faces, voices and documents. But the same technology can strengthen defenses when deployed responsibly. AI-enabled fraud defenses detect anomalies and behavioral patterns that humans can't easily see — flagging deepfake selfies, identifying account takeovers in real time or connecting related synthetic identities across multiple applications. When governed properly, these systems reduce false positives that frustrate legitimate customers while catching sophisticated fraud that static checks miss.
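One of the patterns described above — connecting related synthetic identities across applications — can be illustrated with a toy link-analysis sketch. The data and threshold are invented for illustration; production systems would use far richer features and learned models.

```python
from collections import defaultdict

# Toy applications: synthetic-identity rings often reuse the same
# device, phone number or mailing address across "different" people.
applications = [
    {"id": 1, "name": "A. Smith", "device": "dev-9"},
    {"id": 2, "name": "B. Jones", "device": "dev-9"},
    {"id": 3, "name": "C. Brown", "device": "dev-9"},
    {"id": 4, "name": "D. White", "device": "dev-2"},
]

def flag_linked(apps, key="device", threshold=3):
    """Flag any attribute value shared by >= threshold otherwise-unrelated
    applicants -- a basic signal of a coordinated fraud ring."""
    groups = defaultdict(list)
    for a in apps:
        groups[a[key]].append(a["id"])
    return {v: ids for v, ids in groups.items() if len(ids) >= threshold}

print(flag_linked(applications))  # {'dev-9': [1, 2, 3]}
```

No single application in the flagged group looks suspicious on its own; the signal only emerges across applications, which is exactly what institution-by-institution document checks miss.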

To support banks seeking to adopt new tools, examiners can rely on existing frameworks. Model risk management principles already accommodate AI technologies, including by requiring a bank to maintain an inventory of AI models, track changes, set review thresholds for edge cases, test for drift, measure fairness and apply vendor oversight proportional to the tool's materiality and characteristics. Regulators can reaffirm that these frameworks apply equally to AI-driven fraud-prevention tools.

Ultimately, privacy advocates rightly want fewer Social Security numbers and less other personal information in circulation. Community banks want lower costs and shared access to enterprise-grade defenses. Regulators want fewer breaches. Consumers want to stop uploading their driver's license ten different times.

None of these objectives conflict — and none require new laws or lengthy rulemakings. The technology exists to make identity verification stronger, cheaper and far less invasive of privacy. What's needed now is a clear regulatory signal that innovation, managed well, strengthens the financial system instead of threatening it. Regulators need to say plainly that mDLs, reusable IDs and well-governed AI fit within existing CIP, know-your-customer, and safety-and-soundness frameworks. Doing so would modernize identity verification in ways that strengthen both privacy and security in an increasingly digital world. In a system awash in data but short on trust, a little clarity may be the most practical upgrade of all.
