- Key insight: Starling Bank launched "Scam Intelligence," a generative AI tool that lets customers upload images and texts to check for signs of common payment scams.
- What's at stake: The move is driven by strict U.K. regulations that require banks to reimburse customers for most authorized push payment fraud, which cost U.K. victims £450 million in 2024.
- Forward look: This contrasts sharply with the U.S., where banks are generally not liable for APP fraud, highlighting a growing global divergence in tackling social engineering scams.
Overview bullets generated by AI with editorial review
Starling Bank this week introduced Scam Intelligence, a generative AI-powered tool designed to help customers identify and avoid authorized push payment, or APP, fraud.
Customers can upload an image, such as a screenshot of an online listing, or submit text they find suspicious. The tool analyzes the content using Google's Gemini models and provides personalized risk guidance, highlighting potential red flags such as prices that are too good to be true, attempts to rush the customer into action or payment details that do not match those of a legitimate merchant.
Starling's move underscores the accelerating global pressure to mitigate customer losses to social engineering scams, particularly in jurisdictions where liability has shifted dramatically toward financial institutions.
What is APP fraud?
APP fraud is a crime in which customers are tricked into sending money to a fraudster via bank transfer. The customer unwittingly approves the payment, making it distinct from unauthorized fraud, in which access devices are stolen, accounts are taken over or checks are stolen or forged.
An example of APP fraud is a romance scam, in which the criminal builds a supposedly romantic relationship with the victim, then asks for money on false pretenses, such as a family emergency. In this case, the victim is tricked into authorizing a payment to the criminal.
An example of unauthorized fraud is check fraud, in which the criminal intercepts and attempts to deposit a check sent by the victim to someone else. In this case, the victim did not authorize the payment to the criminal.
In 2024, APP fraud accounted for £450 million (roughly $590 million) in losses in the U.K.
Tackling fraud with generative AI
Starling's Scam Intelligence tool aims to reduce fraud by providing customers with a fast, easily accessible automated advisor that can point out warning signs of a scam.
For example, if the user uploads a screenshot of a listing on Facebook Marketplace or eBay, the tool might flag that the price on the item is suspiciously low, that the ad image is not genuine or that the seller refuses to use secure payment features available on the platform.
Scam Intelligence was built using Google's Gemini models, which are the tech giant's response to OpenAI's ChatGPT.
The Starling tool runs on the Google Cloud platform. It uses the models to understand the context of the uploaded images and text, and a proprietary Starling system produces the final risk assessment presented to the customer.
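Starling has not published implementation details, but the division of labor it describes can be illustrated with a short sketch: a Gemini model interprets the upload, and a separate step turns the model's output into guidance for the customer. The snippet below is a minimal, hypothetical example assuming the google-generativeai Python SDK; the model name, prompt and wrapper function are illustrative, not Starling's actual code.

```python
# Minimal sketch of multimodal scam analysis with a Gemini model.
# Assumes the google-generativeai Python SDK; the prompt, model choice
# and scoring step are illustrative, not Starling's implementation.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")  # hypothetical model choice

PROMPT = (
    "You are a payment-scam analyst. Examine this marketplace listing "
    "screenshot and list red flags such as prices far below market value, "
    "pressure to act quickly, or payment details that bypass the platform."
)

def analyze_screenshot(path: str) -> str:
    """Send the screenshot and instructions to the model; return its analysis."""
    image = Image.open(path)
    response = model.generate_content([PROMPT, image])
    # In a production pipeline, a separate proprietary system (as Starling
    # describes) would turn this free-text analysis into the final risk
    # assessment shown to the customer.
    return response.text

if __name__ == "__main__":
    print(analyze_screenshot("listing_screenshot.png"))
```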
David Hanson, the minister for fraud in the U.K., said he welcomes Starling's tool as "a great example of how AI can be used in the battle against fraud."
While Starling did not detail how the technology assesses scam risk in images and text content provided by users, researchers have explored retrieval-augmented generation, or RAG, as one approach to building such scam-detection tools.
A RAG-based large language model is optimized to reference a knowledge base specified by the model developer before generating a response. In the case of a tool designed to identify scams, the knowledge base might be documents describing the tactics criminals use in various types of scams.
RAG is designed to reduce the risk of a language model hallucinating, which is when the model makes up information.
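As a concrete illustration of the RAG pattern, the toy sketch below retrieves scam-tactic documents before assembling a grounded prompt. The knowledge base, keyword-overlap retriever and prompt template are all hypothetical simplifications; a production system would use vector embeddings for retrieval and send the assembled prompt to an actual language model.

```python
# Toy retrieval-augmented generation (RAG) sketch. The knowledge base and
# retriever are simplified illustrations, not any vendor's implementation.

# Knowledge base: short documents describing known scam tactics (illustrative).
KNOWLEDGE_BASE = [
    "Purchase scams: listings priced far below market value, sellers who "
    "refuse the platform's secure payment features.",
    "Romance scams: a fabricated relationship followed by urgent requests "
    "for money, often citing a family emergency.",
    "Impersonation scams: messages claiming to be a bank or government "
    "agency, pressuring the victim to move funds immediately.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    words = set(query.lower().split())
    scored = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: len(words & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(user_content: str) -> str:
    """Ground the model in retrieved scam-tactic documents before it answers."""
    context = "\n".join(retrieve(user_content))
    return (f"Known scam tactics:\n{context}\n\n"
            f"User-submitted content:\n{user_content}\n\n"
            "Citing only the tactics above, identify any warning signs.")

# The assembled prompt would then be sent to a language model:
print(build_prompt("Seller wants payment outside eBay, phone is half price"))
```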
UK regulatory pressure vs. US liability
The development of Starling's scam advisory tool is heavily incentivized by the unique regulatory environment in the U.K., where, since last October, banks have been required to follow a comprehensive reimbursement regime for APP fraud victims.
Under this mandate from the Payment Systems Regulator, regulated companies must reimburse most victims of APP fraud. Crucially for institutional bottom lines, the cost of reimbursement is split equally between the sending bank and the receiving bank.
The U.K. effectively requires its banks to "underwrite criminal activity" that originates on platforms they cannot influence, such as telecommunications networks and social media, according to Trace Fooshée, strategic advisor in the fraud & AML practice at consultancy Datos Insights.
This contrasts sharply with the U.S. landscape, where the Electronic Fund Transfer Act and its implementing Regulation E do not require banks to reimburse consumers for losses due to APP fraud.
In the U.S., payments obtained through deception are typically considered "authorized," even when they stem from social engineering.
While U.S. regulators have focused on unauthorized fraud (such as account takeover), some banks voluntarily agree to reimburse victims of specific imposter scams, though they are generally not liable for such losses.
The business case for 'upstream' scam detection in the U.S.
While the Starling Bank Scam Intelligence tool was launched in the U.K.'s unique regulatory environment, experts suggest a compelling business case exists for banks to develop similar technology in the U.S.
The main strategic advantage for banks in deploying such a tool is the ability to "capture and accumulate risk signals that are often far 'upstream' from a fraudulent payment event," according to Fooshée.
Fooshée explained that the vast majority of scammers' deception efforts occur prior to — and outside of — the bank's digital environment. This lack of visibility prevents banks from collecting and analyzing the necessary data that might indicate a customer is being targeted.
Tools such as Scam Intelligence "are a solid step in the direction of enabling the collection of risk signals that can be fed into fraud detection systems that are designed to predict whether any given payment order is likely to be the result of fraudulent activity," he said.
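To make the idea of upstream signal collection concrete, here is a hypothetical sketch of how scam-check events could be accumulated per customer and scored when a payment order arrives. The signal names, weights and threshold are invented for illustration; they do not come from Starling, Fooshée or Datos Insights.

```python
# Sketch of how "upstream" scam-check events could feed a downstream
# payment-risk decision. All names, weights and thresholds are invented.
from collections import defaultdict

# Hypothetical weights for risk signals observed before any payment occurs.
SIGNAL_WEIGHTS = {
    "uploaded_suspicious_listing": 0.4,  # scam tool flagged a listing
    "uploaded_suspicious_message": 0.5,  # scam tool flagged a message
    "new_payee_added": 0.2,              # payee created shortly before payment
}

customer_signals: dict[str, list[str]] = defaultdict(list)

def record_signal(customer_id: str, signal: str) -> None:
    """Accumulate a risk signal collected upstream of any payment event."""
    customer_signals[customer_id].append(signal)

def payment_risk_score(customer_id: str) -> float:
    """Combine accumulated signals into a score for the next payment order."""
    return sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in customer_signals[customer_id])

record_signal("cust-42", "uploaded_suspicious_listing")
record_signal("cust-42", "new_payee_added")
if payment_risk_score("cust-42") >= 0.5:  # hypothetical intervention threshold
    print("Hold payment for review and warn the customer")
```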
There would be "no downside" for a U.S. bank to develop a similar scam advisory tool, according to Tracy Goldberg, director of cybersecurity at Javelin, who categorized such an initiative as "security empowerment," a service that helps build customer loyalty.
However, building and launching such a tool would come with costs. Sophisticated tools such as a scam intelligence platform can be "very expensive" and "difficult to engineer and deploy," according to Fooshée.
For these tools to function effectively, the consumer must also consent to share certain information with the bank and be willing to play an active role in using the tool, Fooshée added. He suggested consumers might resist using the tools consistently if they require anything more than completely passive monitoring capabilities.
Lastly, Fooshée and Goldberg noted that similar capabilities are already offered by third-party solutions, including Scamnetic and Scam Ranger, which offer image and text analysis services comparable to Starling Bank's new tool. Some banks might take this as a reason to buy rather than build; others looking to build such a tool in-house might see these alternatives as competition.
Javelin conducts regular reviews of identity protection services from companies such as Equifax, Allstate and TransUnion. These product suites often include credit monitoring, dark web monitoring, insurance against identity theft events and similar services.
Now, many of these platforms are also starting to add scam detection and prevention, according to Goldberg.
A point of comparison: ScamFlag by ThreatMark
Starling's launch of its AI-powered scam detection tool comes a few months after ThreatMark, a fraud prevention company, launched a similar solution that it has been selling to U.S. and U.K. banks.
Like Scam Intelligence, ScamFlag is an AI-powered scam detection tool engineered to protect digital banks and their customers from social engineering attacks. ThreatMark markets ScamFlag as "omnichannel," allowing customers to upload images and screenshots via email, SMS, WhatsApp and other digital channels.
ThreatMark says it trained the generative model on scam samples, which help the model analyze pictures structurally, extract visible text, and check any identified links or bank account numbers against ThreatMark's database.
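The extract-and-check steps ThreatMark describes can be approximated in a short sketch: pull visible text from the image, find links and account numbers, and look them up in a blocklist. The snippet below assumes the open-source pytesseract OCR library as a stand-in; the blocklist, account-number format and matching logic are illustrative, not ThreatMark's proprietary system.

```python
# Sketch of an extract-and-check pipeline: OCR an image, pull out links and
# account numbers, and flag any that appear in a blocklist. Requires the
# Tesseract binary for pytesseract; the blocklist here is a toy stand-in.
import re
from PIL import Image
import pytesseract

KNOWN_BAD = {"http://cheap-phones.example", "12-34-56 98765432"}  # toy blocklist

URL_RE = re.compile(r"https?://\S+")
# U.K.-style sort code plus account number, e.g. "12-34-56 98765432".
ACCOUNT_RE = re.compile(r"\b\d{2}-\d{2}-\d{2}\s+\d{8}\b")

def check_screenshot(path: str) -> list[str]:
    """OCR the image, then flag extracted links or accounts on the blocklist."""
    text = pytesseract.image_to_string(Image.open(path))
    candidates = URL_RE.findall(text) + ACCOUNT_RE.findall(text)
    return [c for c in candidates if c in KNOWN_BAD]

print(check_screenshot("suspicious_listing.png"))
```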
ThreatMark claims the system detects scams with 99% accuracy.
ThreatMark offers ScamFlag as a software development kit, or SDK, which allows institutions that purchase the service to integrate it with their existing mobile applications.