Why cybercriminals like AI as much as cyberdefenders do

Artificial intelligence may escalate a long-running arms race between financial institutions and cybercriminals.

The technology is helping banks’ cybersecurity teams detect and deal with breaches. Unfortunately, AI also creates new vulnerabilities, since leaving machines in charge opens up opportunities for mistakes and manipulation. Further, AI helps attackers do their jobs more efficiently. For example, in attacks carried out last year, the writers of the Petya malware used AI to identify vulnerabilities, scanning millions of ports in seconds to find openings.

“AI is a hammer that can be used for good or bad,” said Jim Fox, a partner, principal and cybersecurity and privacy assurance leader at PwC. “And if your adversaries have a hammer, you'd better have one, too.”

In the right hands, this mighty hammer can do a lot of good. Artificial intelligence software can monitor all network activity and quickly discern odd patterns that could indicate foul play, even if such patterns haven’t been flagged before. It can learn over time to distinguish truly suspicious behavior from normal patterns.
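To make that concrete, here is a minimal sketch of the approach in Python using scikit-learn. It is not any vendor's product, and the traffic features and numbers are invented for illustration: an unsupervised model fits a baseline of normal connections and flags sessions that deviate from it, with no hand-written rules.

```python
# Minimal sketch of rule-free anomaly detection on network sessions.
# All feature names and values here are hypothetical, chosen for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend features per session: bytes sent, bytes received, duration (seconds),
# distinct ports touched. Normal traffic clusters; a port scan does not.
normal = np.column_stack([
    rng.normal(50_000, 10_000, 5_000),   # bytes sent
    rng.normal(200_000, 40_000, 5_000),  # bytes received
    rng.normal(30, 8, 5_000),            # session duration
    rng.poisson(2, 5_000),               # distinct ports per session
])

# Learn what "normal" looks like from observed traffic alone.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A scan-like session: little data, very short, touching hundreds of ports.
suspect = np.array([[2_000, 1_000, 0.5, 400]])
print(model.predict(suspect))     # expected: -1, i.e. flagged as an anomaly
print(model.predict(normal[:3]))  # expected: mostly 1, i.e. treated as normal
```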


At the New York-based investment bank Greenhill & Co., Chief Information Officer John Shaffer sought a better way to deal with zero-day attacks.

“Most of the threats we’re dealing with now aren’t solved by traditional tools like signature-based antivirus [software], or anything that has a signature,” Shaffer said. “The real threat actors know how to get by them. What you’re really interested in is trying to figure out what the smart actors are doing. That’s where machine learning and AI come into play.”

Shaffer installed an AI-based system from Vectra that watches all network traffic at Greenhill. It spots anomalies that standard intrusion detection software can’t see, he said. (Other companies offering AI-based or enhanced cybersecurity products include IBM, Darktrace, FireEye and McAfee.)

When Shaffer deployed the software, it immediately alerted him to some odd traffic patterns on the bank’s network that turned out to be the work of the firm’s own vulnerability scanner.

“That to a lot of systems would look like somebody doing a scan on your network,” Shaffer said.

Vectra doesn’t send out large volumes of alerts, as so many security systems do; it sends perhaps 10 a day. And it doesn’t require rules to be written; it learns on its own to distinguish normal behavior from deviations, saving time and effort for security staff.

The dark side of AI

The U.S. intelligence community has raised a litany of concerns about the use of artificial intelligence: it increases vulnerability to cyberattacks, makes attacks harder to attribute, helps foreign weapon and intelligence systems advance, raises the risk of accidents and substantially increases liability for the private sector, including financial institutions.

“What the U.S. government has said is if everything is run by machines and there’s no more human intervention, then AI is our total law enforcement, our total gatekeeper of everything,” said Christine Duhaime, an attorney at Duhaime Law, based in Toronto. “So the more we get interconnected — the more there are systems deciding what’s safe, what’s good, what’s bad — the more it’s going to be vulnerable because we’re counting on our systems to be smarter and better than the hacker in another country who wants to do us harm.”

The government has also said AI could increase the risk of accidents and substantially increase liability for the private sector, including financial institutions, she pointed out. In other words, the more systems decide things on their own, the greater the likelihood they’ll make a massive mistake that’s really hard to undo.

“It could be it pays the wrong person and that person is gone or it reverses a bunch of transactions or it pays out $1 million to 1,000 people,” Duhaime said. “Or the systems are more exposed and therefore a whole lot of people’s personal information is used in inappropriate ways for which they could be extorted from. A rogue actor with superior computing skills can and will outcode our systems and can do all sorts of harmful acts to our critical infrastructure, including financial infrastructure. Historically, the private sector has never invested enough in cybersecurity ... and until they do that, there will be vulnerabilities.”

The fact that cybercriminals are using AI is all the more reason the private sector needs to do the same, said PwC's Fox, citing the Petya example.

“Those decisions were made at the machine level, at machine speed,” he said. “Nobody was guiding that malware. They wrote an intelligent program to do all that. The only way you're going to defeat that kind of intelligence is with your own.”

One bank with strong cyberdefenses and AI tools had 14,000 machines infected by Petya in two minutes, Fox said. Its AI system alerted the security team within five minutes that something anomalous and possibly malicious was going on, and the bank decided to lock down and shut off all the machines.

How criminals use AI

Steve Grobman, chief technology officer for McAfee, pointed out that whenever new technology comes along, it’s adopted by cybercriminals as well as law-abiding people.

“The reason we have so much financial cybercrime is it’s a very efficient way to steal money with a lower risk of arrest and prosecution,” Grobman said. “The reason cybercriminals don’t knock over banks as much as they use malware or other cyber techniques is because the technology lends itself to providing better outcomes.”

McAfee, Grobman said, is studying “adversarial machine learning.”

“We’re looking at: how will bad actors attempt to poison models?” he said. For instance, crooks might introduce specially crafted data into the training sets the models learn from, making the models easier to evade in the future.

Or they might add noise or false signals to the input data set that makes it harder to tease out the true signal.
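As a toy illustration of the poisoning idea Grobman describes, the sketch below uses Python, scikit-learn and purely synthetic data; it is an assumption-laden example, not drawn from McAfee’s research. The attacker plants benign-labeled samples that resemble the malware they plan to deploy, and a detector retrained on the tainted set is likelier to wave that malware through.

```python
# Toy sketch of training-data poisoning. All data is synthetic and the
# two-feature setup is hypothetical, chosen only to show the mechanism.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Clean training data: benign traffic clusters around 0, known malware around 4.
benign = rng.normal(0.0, 1.0, size=(2_000, 2))
malware = rng.normal(4.0, 1.0, size=(2_000, 2))
X = np.vstack([benign, malware])
y = np.array([0] * 2_000 + [1] * 2_000)

# The attacker's future malware is disguised to sit between the clusters.
future_malware = rng.normal(3.0, 0.3, size=(5, 2))

clean_model = LogisticRegression(max_iter=1_000).fit(X, y)
print("clean model flags it:   ", clean_model.predict(future_malware))  # expected: mostly 1

# Poisoning: 800 crafted samples that resemble the future malware, labeled benign.
poison = rng.normal(3.0, 0.3, size=(800, 2))
X_poisoned = np.vstack([X, poison])
y_poisoned = np.concatenate([y, np.zeros(800, dtype=int)])

poisoned_model = LogisticRegression(max_iter=1_000).fit(X_poisoned, y_poisoned)
print("poisoned model flags it:", poisoned_model.predict(future_malware))  # expected: mostly 0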

“If you overwhelm the defender with false positives, they’ll be forced to recalibrate their model,” Grobman said. He offered an analogy: If a thief wanted to break into a house that had a motion-activated alarm, he might ride his bike past the house every night at 11:00 to intentionally set off the alarm. After a few weeks, the homeowner would get fed up and either recalibrate it to be less sensitive or turn it off. Then the thief could go in.

Bad actors are also starting to use AI to automate formerly human tasks, Grobman said. For instance, AI can generate spear phishing emails tailored to an individual, using tidbits gleaned from email or social media searches.

“Instead of requiring humans to tailor content to the individual, they can en masse create content that is tailored to individuals and thus can have a higher victim conversion rate,” he said.

Grobman has mixed feelings about the government’s warnings about AI.

“It’s critical that we understand any threat technique to the greatest degree possible, so we can build the best possible defense around it,” he said. “But having the government issue a statement acknowledging these new techniques doesn’t make the techniques more relevant. The toothpaste is out of the tube and you can’t put it back in.”

Editor at Large Penny Crosman welcomes feedback at penny.crosman@sourcemedia.com.
