How one bank uses generative AI to fight domestic abuse

Commonwealth Bank is offering its financial fraud-fighting AI to other banks. (Photo: Brent Lewin/Bloomberg)

Commonwealth Bank of Australia made an alarming discovery in 2020 — thousands of transactions were being sent with abusive language attached. And the bank wanted no part of this aggressive new form of cyberbullying.

"Some customers were being sent a large number of low-value transactions that contained abusive language, words, phrases or threats in the description field of those payments, essentially the payment as a messaging service," said Caroline Wall, head of customer vulnerability at Commonwealth Bank. 

These messages are often a precursor to financial abuse, a problem that plagues banks and credit unions globally and costs billions of dollars in losses each year. CBA has found success using generative artificial intelligence to fight this trend, and in late November it said it would share the technology with banks and other companies.

The bank defines financial abuse as when money is used to gain control over another person, often a romantic partner, family member or an older person. The language, which is in the messaging fields of digital payments — similar to the "memo" field on a paper check — isn't necessarily vulgar; instead, the aggressor uses coercive language to gain financial leverage. 

Because the abusive nature of the message is often subtle, abusers can circumvent more traditional controls designed to vet language. Messages can be designed to bully or shame people into sending money, or to deliver nonfinancial abuse, such as attaching toxic language to a child or spousal support payment.

CBA needed a new approach to this problem because law enforcement agencies don't immediately look at payment messages for signs of domestic abuse, and payment fraud vetting doesn't look for signs of relationship abuse. 

"Not all abusive language uses certain keywords which we can detect as being abusive," Wall said. 

In 2021, CBA updated the CommBank mobile app and its NetBank online banking service to block consumers from sending abusive words or phrases in transaction descriptions. The bank has since blocked about one million transactions.
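The article doesn't describe the filter's internals, but the first line of defense, matching transaction descriptions against a curated list of banned words and phrases, can be sketched in a few lines of Python. The phrases and function name below are invented placeholders, not CBA's actual blocklist.

```python
# A minimal sketch of description blocking; the phrases here are
# invented placeholders, not the bank's actual list.
BLOCKED_PHRASES = [
    "you worthless",
    "pay me or else",
]

def is_description_blocked(description: str) -> bool:
    """Return True if a payment description contains a blocked phrase."""
    text = description.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

# A flagged payment would be rejected before it is sent.
print(is_description_blocked("pay me or else you'll regret it"))  # True
```

As Wall notes below, a static keyword list only catches the obvious cases, which is why the bank layered statistical models on top of it.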

CBA's system combines machine learning, natural language processing and large language models trained on public data with text analysis and graph concepts to identify abusive relationships. Graph concepts, or graph theory, refers to combining different graphs and data sources with mathematics to develop predictive models, which in this case can match certain words and phrases to a pattern of behavior. Large language models are capable of producing original content and power emerging technologies such as generative artificial intelligence programs.
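CBA hasn't published its model, but the underlying graph idea, treating accounts as nodes and payments as edges, then hunting for sender-recipient pairs dominated by low-value payments with toxic descriptions, can be illustrated with a toy example. The accounts, toxicity scores and thresholds below are all invented for illustration; the toxicity value stands in for the output of some NLP model scoring the description text.

```python
from collections import defaultdict

# Each payment: (sender, recipient, amount, toxicity), where toxicity is a
# hypothetical 0-1 score from an NLP model applied to the description.
payments = [
    ("acct_a", "acct_b", 0.01, 0.92),
    ("acct_a", "acct_b", 0.05, 0.88),
    ("acct_a", "acct_b", 0.01, 0.95),
    ("acct_c", "acct_d", 120.00, 0.02),
]

# Aggregate payments per sender->recipient edge, as in a payment graph.
edges = defaultdict(list)
for sender, recipient, amount, toxicity in payments:
    edges[(sender, recipient)].append((amount, toxicity))

# Flag edges that look like messaging rather than money movement:
# many low-value payments with consistently toxic descriptions.
for (sender, recipient), txns in edges.items():
    low_value = sum(1 for amount, _ in txns if amount < 1.00)
    avg_toxicity = sum(t for _, t in txns) / len(txns)
    if low_value >= 3 and avg_toxicity > 0.8:
        print(f"Review {sender} -> {recipient}: possible abusive messaging")
```

The point of the graph framing is that no single payment looks suspicious on its own; the signal lives in the repeated pattern between two accounts.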

Financial crooks are using large language models to improve phishing attacks and malware. Banks such as JPMorgan Chase are using the technology to fight email fraud and other attacks embedded in financial communication. 

CBA's use is similar to Chase's. The Australian bank analyzes evidence of sustained abuse across several criteria in payments: the value of the transaction, the frequency and velocity of transactions and the types of messages.
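One plausible way to combine those criteria, not necessarily the bank's, is a weighted score over a pair's transaction history. This sketch assumes hypothetical weights and thresholds; a production model would learn them from labeled cases.

```python
from datetime import datetime

# Toy transaction history for one sender-recipient pair; timestamps,
# amounts and message flags are invented for illustration.
txns = [
    {"ts": datetime(2024, 5, 1, 9, 0), "amount": 0.01, "abusive_msg": True},
    {"ts": datetime(2024, 5, 1, 9, 5), "amount": 0.01, "abusive_msg": True},
    {"ts": datetime(2024, 5, 2, 22, 0), "amount": 0.05, "abusive_msg": True},
]

def sustained_abuse_score(txns: list[dict]) -> float:
    """Combine value, frequency, velocity and message signals into one score."""
    if not txns:
        return 0.0
    span_days = max((txns[-1]["ts"] - txns[0]["ts"]).days, 1)
    velocity = len(txns) / span_days           # payments per day
    low_value = sum(t["amount"] < 1.00 for t in txns) / len(txns)
    abusive = sum(t["abusive_msg"] for t in txns) / len(txns)
    # Hypothetical weights; a real model would fit these to labeled data.
    return 0.3 * min(velocity / 5, 1.0) + 0.3 * low_value + 0.4 * abusive

print(f"risk score: {sustained_abuse_score(txns):.2f}")  # 0.88
```

A high score would route the pair to a human investigator rather than trigger an automatic block, since the cost of wrongly accusing a customer is high.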

In Australia, 40% of the adult population has suffered financial abuse or knows someone who has, according to research from CommBank and Deloitte, which put the yearly cost at about US$3.7 billion. In the U.S., financial abuse cost about $28 billion in 2022, according to FinCEN, which found that three-quarters of victims know their abuser.

Financial institutions take a variety of approaches to fight the abuse that can result from transaction messaging. Landings Credit Union in Arizona is among a group of financial institutions that are using dementia training to help staff protect the credit union's older members. And elsewhere in Australia, Westpac enables customers to click buttons on digital transactions to report inappropriate messages. Westpac monitors language in outbound transactions and blocks transactions with messaging that is deemed consistent with abuse or fraud. 

PayPal and Venmo also have a mechanism to report and monitor transaction messages for signs of abuse or fraud. 

CBA built its model in partnership with AI firm H2O.ai. It is available on GitHub, a large global platform that hosts source code.

"This means that any bank can choose to use the source code and model to monitor and detect high-risk transactions that may constitute financial abuse," Wall said. "From there, they can investigate and take further action if they choose. Helping to address financial abuse is an issue for everyone. And the benefit will be for everyone."

AI is widely used to fight financial crimes such as money laundering, creating a potential runway for using the technology to combat financial abuse. "AI is already applied to many digital payments already, in sanctions screening and fraud for example, so financial abuse is a natural extension in many ways," said Gareth Lodge, a senior analyst for payments at Celent.

Some digital payment systems, such as the New Payments Platform Australia, are able to include emojis as well as text, Lodge said. 

"While many are innocent — 'we'll have a blast at the party tonight' — others are more sinister, and sadly there are cases of harassment using the text fields," Lodge said. "Understanding [the good from the bad] is something that AI will be able to help with."
