Fraudsters have a new use for generative AI: Phishing

An email security company has found a 12-fold increase in the number of phishing emails it has seen since the advent of ChatGPT, and malicious models may be to blame.

If you feel as though you have received more phishy text messages and emails in recent months, you're probably not imagining it.

Cybersecurity experts have warned for months that fraudsters and cybercriminals can use large language models to write malicious code and craft phishing emails and texts more efficiently. That efficiency could help novice attackers get up and running and let seasoned fraudsters reach far more potential victims.

Cybersecurity firm SlashNext said in a recent report that it has seen a more than 12-fold increase in malicious emails since the launch of ChatGPT at the end of 2022. As the company has documented previously, cybercriminals circulate malicious chatbots that help craft these emails, the majority (68%) of which are business email compromise attempts against the potential victim.

It is unclear how many of these emails are handcrafted, written with legitimate models such as ChatGPT or Anthropic's Claude, or created by purpose-built malicious language models. While many legitimate products have safeguards designed to keep users from employing them for illegal, harmful or fraudulent activities, there have been numerous attempts to circumvent those protections.

Cybercriminals have used malicious AI models to write malware and phishing emails and to automate other fraudulent activities, including payment scams. That capacity for automation is the real cybersecurity threat language models pose, according to Bruce Schneier, chief of security architecture at data infrastructure company Inrupt.

"Today's human-run scams aren't limited by the number of people who respond to the initial email contact," Schneier said in a blog post. "They're limited by the labor-intensive process of persuading those people to send the scammer money. LLMs are about to change that."

According to Schneier, it's not all that important whether emails written by a malicious language model are more or less convincing than those written manually by a fraudster. The important part is that, in the right situation, some people can be tricked by what others might see as obvious fraud. If fraudsters can reduce the amount of work it takes to reach those people, they will be able to steal more money.

Malicious language models give fraudsters a ready means of writing more, and more convincing, phishing texts and emails, and such models have proliferated. They can also generate fraudulent content ranging from fake marketplace listings to fake job postings and fake recruiter profiles, any of which can serve as an opening for fraudsters.

"A single scammer, from their laptop anywhere in the world, can now run hundreds or thousands of scams in parallel, night and day, with marks all over the world, in every language under the sun," Schneier said. This is due to efforts at improving the efficiency of language models to the point that models like Facebook's LLaMa can run fast and cheaply on powerful laptops, he said.

One common defense against these fraudulent emails is automated filtering. SlashNext itself sells security products designed to identify AI-generated emails and text messages; other vendors offering similar services include Cloudflare Area 1, Mimecast, Avanan and Barracuda.
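
None of these vendors publish their detection logic, but the general shape of content-based filtering can be sketched with a toy heuristic. Everything below, the keyword list, the regexes and the weights, is an illustrative assumption, not any product's actual method.

```python
import re

# Toy phishing-risk scorer; illustrative only, not any vendor's detection logic.
URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify", "wire"}

def link_mismatch(body_html: str) -> bool:
    """Flag anchors whose visible text names one domain but whose href points elsewhere."""
    anchors = re.findall(r'<a\s+href="https?://([^/"]+)[^"]*"[^>]*>([^<]+)</a>', body_html)
    for href_domain, link_text in anchors:
        shown = re.search(r'([\w-]+(?:\.[\w-]+)+)', link_text.lower())
        if shown and shown.group(1) not in href_domain.lower():
            return True
    return False

def phishing_score(sender: str, subject: str, body_html: str) -> float:
    """Combine a few weak signals into a 0-1 risk score."""
    text = f"{subject} {body_html}".lower()
    score = 0.2 * sum(w in text for w in URGENCY_WORDS)
    if link_mismatch(body_html):
        score += 0.5
    if re.search(r"@[\w.-]*\d", sender):  # digits in the sender domain, e.g. "paypa1"
        score += 0.3
    return min(score, 1.0)

# Example: urgent wording plus a link whose text and destination disagree.
msg = ('<p>Your account is suspended. Verify immediately:</p>'
       '<a href="http://evil.example/login">bank.com/secure</a>')
print(phishing_score("alerts@paypa1.example", "Urgent: verify your account", msg))
```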

The primary advice cybersecurity experts give for fighting phishing and other forms of fraud is regular training for employees, not just one-time reminders, to establish a culture of security awareness at work. Researchers have found that personalized phishing training (i.e., tailoring lessons and test emails to individuals) is far more effective than one-size-fits-all curricula.
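
As a hedged sketch of what personalization can mean in practice, the snippet below picks each employee's next simulated phish based on the lure categories they have previously fallen for. The category names and data structures are hypothetical, not taken from any particular training platform.

```python
from collections import Counter

# Hypothetical lure categories a training program might track.
TEMPLATES = {
    "invoice_fraud": "Fake invoice approval request",
    "credential_harvest": "Password-reset lookalike page",
    "exec_impersonation": "Urgent wire request 'from the CEO'",
}

def next_simulation(clicked_categories: list[str]) -> str:
    """Target the category this employee falls for most often; default for new hires."""
    if clicked_categories:
        weakest, _ = Counter(clicked_categories).most_common(1)[0]
        return TEMPLATES[weakest]
    return TEMPLATES["credential_harvest"]

print(next_simulation(["invoice_fraud", "invoice_fraud", "exec_impersonation"]))
# -> "Fake invoice approval request"
```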

Companies tend to underestimate the potency of social engineering attacks such as phishing — and, by extension, the efficacy of training employees to identify these attacks — according to Roger Grimes, a data-driven defense evangelist at the security awareness training platform KnowBe4.

"Every organization should focus more on defeating social engineering and phishing and less on other types of attacks that are far less likely to happen," Grimes said. "It is because nearly every business fails to adequately focus on social engineering as the number one attack vector, by far, that allows hackers and their malware creations to be so successful."
