- Key insight: Starling Bank launched "Scam Intelligence," a generative AI tool that lets customers upload images and texts to check for signs of common payment scams.
- What's at stake: The move is driven by strict U.K. regulations that require banks to reimburse customers for most authorized push payment fraud, which cost U.K. victims £450 million in 2024.
- Forward look: This contrasts sharply with the U.S., where banks are generally not liable for APP fraud, highlighting a growing global divergence in tackling social engineering scams.
Overview bullets generated by AI with editorial review
Starling Bank this week introduced Scam Intelligence, a generative AI-powered tool designed to help customers identify and avoid authorized push payment, or APP, fraud.
Customers can upload a screenshot or paste suspicious text into the app; the tool then analyzes the content using Google's Gemini models and provides personalized risk guidance, highlighting potential red flags such as prices that are too good to be true, attempts to rush the customer into action, or payment details that do not match those of a legitimate merchant.
Starling's move underscores the accelerating global pressure to mitigate customer losses to social engineering scams, particularly in jurisdictions where liability has shifted dramatically toward financial institutions.
What is APP fraud?
APP fraud is a crime in which customers are tricked into sending money to a fraudster via bank transfer. The customer unwittingly approves the payment, making it distinct from unauthorized fraud, in which access devices are stolen, accounts are taken over or checks are stolen or forged.
An example of APP fraud is a romance scam, in which the criminal builds a supposedly romantic relationship with the victim, then asks for money under false pretenses, such as a family emergency. In this case, the victim is tricked into authorizing a payment to the criminal.
An example of unauthorized fraud is check fraud, in which the criminal intercepts and attempts to deposit a check sent by the victim. In this case, the victim did not authorize a payment to the criminal.
In 2024, APP fraud accounted for £450 million (or roughly $590 million) in losses in the U.K.
Tackling fraud with generative AI
Starling's Scam Intelligence tool aims to reduce fraud by providing customers with a fast, easily accessible automated advisor that can point out warning signs of a scam.
For example, if the user uploads a screenshot of a listing on Facebook Marketplace or eBay, the tool might flag that the price on the item is suspiciously low, that the ad image is not genuine or that the seller refuses to use secure payment features available on the platform.
Scam Intelligence was built using Google's Gemini models, which are the tech giant's response to OpenAI's ChatGPT.
The Starling tool runs on the Google Cloud platform, using the Gemini models to understand the context of the uploaded images and text; a proprietary Starling system then produces the final risk assessment presented to the customer.
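Starling has not disclosed implementation details beyond this, but as a rough illustration, a minimal multimodal analysis step could look like the sketch below, which uses Google's public Gemini Python SDK. The prompt wording, model choice and red-flag categories here are assumptions for illustration, not Starling's actual system.

```python
# Hypothetical sketch of a multimodal scam-screening call; not Starling's
# actual implementation. Requires the google-generativeai package and an
# API key for Google's Gemini API.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # assumption: standard API-key auth

model = genai.GenerativeModel("gemini-1.5-flash")

PROMPT = (
    "You are a payment-scam screening assistant. Examine this marketplace "
    "listing and point out red flags such as a price that is too good to "
    "be true, pressure to act quickly, or payment details that bypass the "
    "platform's secure checkout. Answer with a short bullet list."
)

def assess_listing(path: str) -> str:
    """Send the screenshot plus instructions to the model, return its text."""
    image = Image.open(path)
    response = model.generate_content([PROMPT, image])
    return response.text

print(assess_listing("marketplace_listing.png"))
```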
David Hanson, the minister for fraud in the U.K., said he welcomes Starling's tool as "a great example of how AI can be used in the battle against fraud."
While Starling did not detail the technology it uses to assess scam risk in the images and text customers provide, researchers have explored retrieval-augmented generation, or RAG, as one approach to building such tools.
A RAG-based large language model is optimized to reference a knowledge base specified by the model developer before generating a response. In the case of a tool designed to identify scams, the knowledge base might be documents describing the tactics criminals use in various types of scams.
RAG is designed to reduce the risk of a language model hallucinating, or making up information.
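As a rough sketch of the RAG pattern described above, the example below embeds a small, invented knowledge base of scam tactics, retrieves the entries most similar to a user's message, and hands them to the model as context. The knowledge-base entries, prompt and model choices are illustrative assumptions, not any vendor's actual system.

```python
# Hypothetical RAG sketch: retrieve the closest-matching scam-tactic notes,
# then let the model answer with that context.
import numpy as np
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

KNOWLEDGE_BASE = [
    "Purchase scams: sellers demand a bank transfer instead of the platform checkout.",
    "Romance scams: a sudden 'family emergency' is used to request money.",
    "Impersonation scams: messages claiming to be the bank urge moving funds to a 'safe account'.",
]

def embed(text: str) -> np.ndarray:
    result = genai.embed_content(model="models/text-embedding-004", content=text)
    return np.array(result["embedding"])

DOC_VECTORS = [embed(doc) for doc in KNOWLEDGE_BASE]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank knowledge-base entries by cosine similarity to the query."""
    q = embed(query)
    scores = [float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))
              for d in DOC_VECTORS]
    top = np.argsort(scores)[::-1][:k]
    return [KNOWLEDGE_BASE[i] for i in top]

def answer(user_message: str) -> str:
    """Ground the model's response in the retrieved scam-tactic notes."""
    context = "\n".join(retrieve(user_message))
    model = genai.GenerativeModel("gemini-1.5-flash")
    prompt = (f"Known scam tactics:\n{context}\n\n"
              f"Does this message show signs of a scam?\n{user_message}")
    return model.generate_content(prompt).text

print(answer("The seller says pay by bank transfer today or the deal is off."))
```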
U.K. regulatory pressure vs. U.S. liability
The development of Starling's scam advisory tool is heavily incentivized by the unique regulatory environment in the U.K., where since last October, banks have been required to follow a comprehensive reimbursement regime for APP fraud victims.
Under this mandate from the Payment Systems Regulator, or PSR, regulated companies must reimburse most victims of APP fraud. Crucially for institutional bottom lines, the cost of reimbursement is split equally between the sending bank and the receiving bank; for a £1,000 reimbursement, for example, each bank bears £500.
This contrasts sharply with the U.S. landscape, where the Electronic Fund Transfer Act and its implementing Regulation E do not require banks to reimburse consumers for losses due to APP fraud.
In the U.S., payments obtained through deception are typically considered "authorized" because the customer initiated them, even when they stem from social engineering.
While U.S. regulators have focused on unauthorized fraud (such as account takeover), some banks voluntarily agree to reimburse victims of specific imposter scams, though they are generally not liable for such losses.
A point of comparison: ScamFlag by ThreatMark
Starling's launch of its AI-powered scam detection tool comes a few months after ThreatMark, a fraud prevention company, launched a similar solution that it has been selling to U.S. and U.K. banks.
Like Scam Intelligence, ScamFlag is an AI-powered scam detection tool engineered to protect digital banks and their customers from social engineering attacks. ThreatMark markets ScamFlag as "omnichannel," allowing customers to upload images and screenshots via email, SMS, WhatsApp and other digital channels.
ThreatMark says it trained the generative model on scam samples, which help the model analyze pictures structurally, extract visible text, and check any identified links or bank account numbers against ThreatMark's database.
ThreatMark claims the system detects scams with 99% accuracy.
ThreatMark offers ScamFlag as a software development kit, or SDK, which allows institutions that purchase the service to integrate it with their existing mobile applications.
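ThreatMark has not published how the link and account-number checks are implemented. One common pattern, sketched below, is to pull indicators out of text extracted from a screenshot using regular expressions and compare them against a known-bad list; the blocklist entries and patterns here are invented for illustration.

```python
# Illustrative sketch: extract links and account details from OCR'd
# screenshot text, then check them against a known-bad list. The blocklist
# contents and regex patterns are assumptions, not ThreatMark's database.
import re

BLOCKLIST = {"paypa1-secure.example", "12345678"}  # hypothetical entries

URL_RE = re.compile(r"https?://([\w.-]+)")   # capture the domain
UK_ACCOUNT_RE = re.compile(r"\b\d{8}\b")     # 8-digit U.K. account number

def flag_indicators(ocr_text: str) -> list[str]:
    """Return any extracted indicators that appear on the blocklist."""
    hits = []
    for domain in URL_RE.findall(ocr_text):
        if domain in BLOCKLIST:
            hits.append(f"known scam domain: {domain}")
    for account in UK_ACCOUNT_RE.findall(ocr_text):
        if account in BLOCKLIST:
            hits.append(f"known mule account: {account}")
    return hits

sample = "Pay via https://paypa1-secure.example to account 12345678 today."
print(flag_indicators(sample))  # both indicators match the blocklist
```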