BankThink

Banks need to leverage big data to combat a surge in AI-enabled fraud

It is time for the financial services industry to take a united stand against fraud, taking advantage of AI and large language models to track it, writes Prathamesh Khedekar.

The financial services sector stands at a critical juncture, grappling with an intensifying wave of sophisticated fraudulent transactions that resulted in $2 billion in losses in 2022 alone. The Treasury Department's Financial Crimes Enforcement Network issued a warning last month, noting that check fraud incidents have doubled in the past two years, from 350,000 to 680,000. The impact goes well beyond credit cards and checks: Popular fintech platforms such as Zelle — a service widely used by consumers to conduct online financial transactions — are also bearing the brunt. Given how far this problem has spread across the finance sector, no single organization can address it effectively on its own. It is imperative now more than ever for banks, regulatory agencies and technology partners to join forces to combat financial fraud.

The United States is not the only country witnessing a surge in fraudulent transactions. India's finance sector reported over 14,000 major fraudulent transactions in 2023, with the Reserve Bank of India documenting frauds totaling approximately $3.62 billion. In the U.K., there is growing concern over "deepfake" frauds, in which scammers use cutting-edge software to impersonate potential romantic partners or family members in crisis, mimicking their voices and appearances with near-perfect accuracy. These sophisticated scams cost Britain's financial sector $754 million in the first half of 2023 alone.

Last month, Equifax Canada also raised the alarm on a notable increase in identity fraud, underscoring a global trend toward more sophisticated forms of fraudulent financial transactions. These incidents signal a clear need for a more robust defense mechanism in the finance sector. This is where technologies like artificial intelligence and large language models, or LLMs, can help us transform our approach to fraud detection.

Traditional AI models — systems trained on historical transaction data such as wire transfer logs and credit and debit card transaction volumes — can spot fraudulent activity with remarkable accuracy. But they cannot comprehend the insights that might be available in customer communication channels: emails, phone calls and text messages. Financial institutions can now overcome this limitation by combining these traditional AI models with specialized LLMs.

LLMs are advanced AI systems designed to understand natural language, a crucial element in most bank-customer interactions. These models significantly enhance the bank's ability to sift through data across various customer communication mediums, including audio and video calls, text messages, emails and social media, looking for signs of fraudulent activity. By integrating LLMs with traditional AI models — which track financial transactions — banks can create a hybrid solution to detect fraudulent transactions with heightened precision and accuracy. This hybrid approach represents a paradigm shift in fraud detection, offering banks the ability to process and analyze data at a scale and speed unattainable by human analysts.
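To make the hybrid idea concrete, here is a minimal sketch of how a transaction-based risk score and a communications-based risk score might be blended into one fraud signal. Everything in it is a hypothetical stand-in: a real deployment would replace `transaction_risk` with a trained anomaly model and `text_risk` with a fine-tuned LLM classifier, and the blending weights would be tuned on labeled data.

```python
def transaction_risk(amount: float, avg_amount: float) -> float:
    """Stand-in for a traditional model: score how far a transaction
    deviates from the customer's typical amount, on a 0-1 scale."""
    if avg_amount <= 0:
        return 1.0
    ratio = amount / avg_amount
    # Amounts near the customer's average score ~0; 10x the average scores 1.
    return min(1.0, max(0.0, (ratio - 1.0) / 9.0))

def text_risk(message: str) -> float:
    """Stand-in for an LLM classifier scoring a customer communication.
    A real system would call a language model here, not keyword matching."""
    red_flags = ("urgent", "gift card", "wire immediately", "verify account")
    hits = sum(flag in message.lower() for flag in red_flags)
    return min(1.0, hits / 2.0)

def hybrid_score(amount: float, avg_amount: float, message: str,
                 w_tx: float = 0.6, w_text: float = 0.4) -> float:
    """Weighted blend of the two signals; the weights are illustrative."""
    return w_tx * transaction_risk(amount, avg_amount) + w_text * text_risk(message)

# A large, atypical transfer paired with scam-flavored language scores high;
# a routine transaction with an innocuous message scores near zero.
suspicious = hybrid_score(9500, 500, "URGENT: wire immediately to verify account")
routine = hybrid_score(500, 500, "Thanks for the help yesterday")
```

The design point the sketch illustrates is that neither signal alone suffices: a large transfer may be legitimate, and alarming language may accompany no transaction at all, but the combination of both is far more discriminating.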

Major banks and financial institutions, including JPMorgan and MasterCard, have begun to employ a sophisticated hybrid AI approach for detecting fraud. This method represents a significant advancement in fraud detection technology. However, integrating these hybrid models into the banking sector presents several challenges.

Data is the lifeblood of these systems, and managing large volumes of data in compliance with financial regulations and privacy laws requires specialized knowledge and skills. Additionally, the infrastructure — both hardware and software — needed to support these models necessitates substantial investments in both human and financial resources, which might be beyond the reach of some banks.


Another concern is the potential for false positives, where legitimate transactions may be mistakenly flagged as fraudulent. This could have dire consequences, ranging from customers losing access to their funds to innocent account holders being wrongly accused of fraud.

While these are all legitimate concerns, the reliability and robustness of these models hinges on the quality and diversity of the data used for training, as well as the time dedicated to refining these models. By leveraging well-curated and diverse datasets for training, banks can greatly reduce the risks associated with these models, enhancing their reliability and effectiveness in combating fraud.

Similarly, while not all banks may have the human and capital resources needed to independently build such sophisticated systems, a collaborative model presents a viable solution. By adopting this model, banks can pool resources into a centralized system, sharing anonymized data from transactions and customer interactions. This strategy spreads the financial and resource burden across the sector, encouraging a unified effort toward innovation and risk management.

A centralized and advanced AI system created through this collaboration would enable all participating banks to identify fraudulent transactions with unmatched precision and speed. The success of such an approach hinges on multiple factors including but not limited to the level of collaboration, regulatory support and a shared commitment to safeguarding customer interests.

Collaboration among banks, technology providers and regulatory bodies is essential for sharing best practices, ensuring consumer privacy and navigating the ethical considerations of AI deployment. Regulatory support, in particular, is vital. Clear, supportive guidelines from regulators can help foster an environment of innovation, facilitating the responsible use of AI and LLMs while keeping the industry a step ahead of fraudsters.

It is time for the financial services industry to take a united stand against fraud. While our $27.9 trillion GDP makes it look as if our financial defenses are secure, we could very well be one major fraudulent transaction away from eroding the consumer trust and confidence that form the bedrock of our financial ecosystem.
