BankThink

Banks should enlist AI to help customers recognize fraud in real time

Banks should take a page from the health sciences' playbook and use artificial intelligence to "nudge" consumers away from transactions that have the characteristics of known fraud schemes, writes Michal Tresner, of ThreatMark.

Banks and fraudsters are locked in an escalating technological arms race, with both sides deploying increasingly sophisticated AI and machine learning capabilities. Yet despite massive investments in detection systems, fraud losses continue climbing because the industry fundamentally misunderstands the battlefield — today's most devastating attacks target human psychology, not system vulnerabilities.

The most dangerous moment in fraud isn't when hackers breach systems — it's when victims genuinely believe they're making smart decisions that are, in reality, anything but. People actively being scammed rarely recognize their vulnerability and resist warnings that contradict their perceived reality.

Another industry faced with the same type of problem — people acting counter to their own benefit — is health care, and a burgeoning field of study provides an interesting analogue from which banks might learn how to better fight fraud with new technologies.

Over the past decade, medical researchers have pioneered an evidence-based discipline around AI-driven behavior change interventions, or AIDBCI. These successfully modify human behavior at scale by bypassing psychological resistance mechanisms that make conventional warnings ineffective. This approach has been traditionally deployed for helping people stop smoking, make healthier eating decisions or exercise more, but I believe it may hold the key to disrupting fraud by helping banking customers make better decisions.

The results speak for themselves. A recent meta-analysis found that AIDBCI reduced symptoms of depression by 64% and distress by 70% compared with control groups. In lifestyle interventions, studies show that AI-powered coaching produces consistent positive trends in physical activity levels.

This field of AIDBCI has determined that lasting behavior change requires more than information — it demands personalized, contextual intervention at the moment of decision. Systems underpinned by AI learn individual patterns and timing, enabling a diabetes app to deliver blood sugar reminders when patients are most receptive, rather than simply sending generic alerts.

The breakthrough lies in removing human judgment (e.g., by doctors) from intervention moments. When AI delivers objective, data-driven recommendations, it eliminates the interpersonal dynamics that let people rationalize ignoring advice. Evidence shows that patients respond more positively to computational assessments than to human warnings, creating space for behavioral change.

Modern fraud exploits psychological manipulation rather than technical vulnerabilities. The same principles revolutionizing health care can help build safer banking practices by analyzing digital behavior signals indicating manipulation in progress — such as navigation hesitation, typing disruptions and device positioning suggesting active calls.
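
To make this concrete, here is a minimal, hypothetical Python sketch of how such behavior signals might be combined into a manipulation-risk score. The signal names, weights and threshold are illustrative assumptions, not a description of any bank's or vendor's actual system.

from dataclasses import dataclass

@dataclass
class SessionSignals:
    # Hypothetical per-session telemetry a bank might capture
    navigation_hesitation_sec: float  # idle time between screens
    typing_disruptions: int           # pauses and corrections while entering payment details
    active_call_detected: bool        # device signals suggesting an ongoing phone call

def manipulation_risk_score(s: SessionSignals) -> float:
    # Combine signals into a rough 0-1 score; weights are illustrative only
    score = min(s.navigation_hesitation_sec / 60.0, 1.0) * 0.4
    score += min(s.typing_disruptions / 5.0, 1.0) * 0.3
    score += 0.3 if s.active_call_detected else 0.0
    return round(score, 2)

session = SessionSignals(navigation_hesitation_sec=45, typing_disruptions=4, active_call_detected=True)
if manipulation_risk_score(session) >= 0.7:
    print("Deliver a real-time educational nudge rather than a generic warning.")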

Rather than generic alerts, AI could deliver personalized education that explains why the content or correspondence is suspicious. Users might see highlighted manipulation techniques, visual cues identifying fraudulent elements and comparisons with legitimate communications. By understanding targeting tactics — deliberate misspellings, urgency language, spoofed logos — users can develop pattern recognition skills that extend beyond the immediate threat.

Research shows that well-timed nudges, rather than intrusive interruptions, can redirect harmful decision-making. AI-assisted nudging takes this further, employing artificial intelligence to dynamically adapt interventions for greater impact.

This mirrors health care's success with personalized behavior change interventions and the application of behavioral economics, both directly applicable to fraud prevention. Health care AI succeeds because it recognizes individual behavioral patterns and delivers timely, tailored messaging. Through hyper-personalization, AI can significantly improve decision-making by considering personal history, current context and environmental factors.

Behavioral biometrics research shows that AI can detect deviations from normal patterns, suggesting external influence or coercion. Just as health care systems deploy AI to monitor patients' vital signs for concerning changes, banks can tailor fraud education based on customer sophistication, risk tolerance and communication preferences. This allows messages to be framed in more personally meaningful ways: "This request matches patterns we've seen in investment scams targeting customers like yourself. Here's what to verify before proceeding."
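
As an illustration only, a hypothetical message-tailoring step might look like the following Python sketch; the scam categories, customer segments and wording are placeholder assumptions rather than any production logic.

def tailored_fraud_nudge(scam_type: str, segment: str) -> str:
    # Templates and segments are illustrative placeholders
    templates = {
        "investment": "This request matches patterns we've seen in investment scams targeting {segment}.",
        "impersonation": "This message resembles impersonation scams recently reported by {segment}.",
    }
    base = templates.get(scam_type, "This activity resembles a known fraud pattern.")
    tip = " Here's what to verify before proceeding: confirm the request through a channel you already trust."
    return base.format(segment=segment) + tip

print(tailored_fraud_nudge("investment", "customers like you"))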

The key insight from health care is that effective intervention depends not only on identifying threats, but on understanding how individuals process them. AI systems can learn cognitive patterns — such as typical responses to urgency, trusted authority types and resonant social cues — then shape interventions accordingly.

Health care demonstrates that proactive intervention costs far less than reactive treatment. Similarly, customer empowerment systems require lower investment than fraud recovery, regulatory penalties and damaged relationships. Studies show that digital behavior change interventions can achieve sustained improvements in decision-making while reducing the need for restrictive security measures that create customer friction.

Drawing from health care's proven success with AI-driven behavior change, banks need two critical components to make this approach work in fraud prevention.

Banks already employ behavioral scientists extensively in marketing and sales divisions to influence customer decisions. This same expertise must be integrated into fraud prevention teams. Health care's success stems from combining technological capability with deep understanding of human psychology — banks need behavioral scientists working alongside fraud analysts to design interventions that account for cognitive biases, emotional states and decision-making patterns that fraudsters exploit.

Health care's behavior change model succeeds because it treats patients as partners in their own well-being rather than passive recipients of medical intervention. Banks must similarly shift from viewing customers as potential fraud victims to empowering them as informed decision-makers. This requires organizational commitment to transparency, education and collaborative fraud prevention rather than purely protective measures.

When banking customers develop a genuine understanding of how they're being targeted, they become partners in their own protection rather than passive recipients of security measures. This fundamental shift — from protecting transactions to protecting people — represents the future of financial security.
