'Human, Please Look at This': Nasdaq Using AI to Spot Abuses

By Penny Crosman, Editor at Large

Certain things make Valerie Bannert-Thurner raise an eyebrow when looking for signs of bad behavior on the Nasdaq exchange. Gloating in the chatrooms, for example.

"I like the example of excessive cheering because the guys just can't help themselves but cheer," said Bannert-Thurner, who is senior vice president and head of risk and surveillance at Nasdaq.

Another worrisome indicator is seemingly too-good-to-be-true trading profits.

"If people are excessively profitable given how they trade and in comparison to everybody else trading the same instruments with similar styles, then we ask, is this luck or something else?" Bannert-Thurner said. "You just can't outperform the market all the time."

But with 14 million trades a day on Nasdaq and innumerable chats and emails, she and her colleagues can't look at everything. Enter artificial intelligence.

This year, the exchange began using AI to help it detect market abuse. Bank clients of Nasdaq and its artificial intelligence partner, Digital Reasoning, are also starting to use the technology — UBS is one.

It's another area where artificial intelligence is changing work in financial services, along with virtual assistants, back-office operations, lending decisions, authentication, compliance, cybercrime detection, fraud detection, and anywhere there's simple paperwork that could easily be automated.

According to independent market expert Anshuman Jaswal, who until recently was a capital markets analyst at Celent, Nasdaq is one of the first exchanges to use AI.

"This is an emerging trend, but not all leading exchanges and vendors have the same capabilities," he said.

Upgrading Nasdaq

To survive as an exchange, Nasdaq has to prove it's doing a good job of watching out for insider abuse and market manipulation — otherwise, firms won't use it.

"Our entire existence is based on having the best detection mechanism possible," Bannert-Thurner said.

Nasdaq operates 25 exchanges in Europe, Canada and the U.S.; this accounts for a mere 37% of its revenue. The rest of its business is providing technology to other exchanges and trading firms: software, market data, and the trading, clearing, settlement, risk and surveillance technologies it uses to operate its own markets. The company's SMARTS trade surveillance platform is used by 45 outside exchanges and 13 regulators. It's also used by 120 market participants, including broker-dealers, buy-side firms, and hedge funds that have to monitor what their employees are doing.

All are looking for the same things: market manipulation, insider trading, and other forms of abuse.

"The data they have accessible to them is different," Bill Nosal, the vice president of market technology at Nasdaq, pointed out. "Exchanges see things at a macro level and a higher level. The firms see the trading at their own firm and customer level with even greater detail into the traders and the salespeople involved in trading."

Buy-side firms tend to zero in on trading activities at sensitive times, such as the release of market research or an analyst report. 

In all cases, technology is being used to solve the same two problems, Nosal said. "One is, how many bodies do you throw at this, and can you use technology to reduce that. Two, can you use technology to look at the overwhelming amounts of data that humans simply can't."

Looking for Dodgy Behavior

Nasdaq, which describes itself as self-regulating, deploys large teams of surveillance staff to maintain the integrity of its market, using SMARTS.

In addition to the trades themselves, they monitor public market data, prices, spreads and the full order book.

"We need to understand, for example, how heavy is the participation of a member in the overall order book? Is that changing?" Bannert-Thurner said.

Nasdaq overlays the private account and subaccount data it gets from brokers, along with social media, onto the trading data to see how individual firms or accounts are doing in the context of the overall market.
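Nasdaq has not published how that overlay works in practice. Purely as a rough illustration, the Python sketch below computes each member's share of a toy order book and compares it with an assumed historical baseline; the column names, figures and 10-point cutoff are all hypothetical, not Nasdaq's actual schema or thresholds.

```python
import pandas as pd

# Hypothetical order-book snapshot: one row per resting order.
# The schema, members and sizes are invented for illustration.
orders = pd.DataFrame({
    "member": ["A", "A", "B", "C", "C", "C"],
    "instrument": ["XYZ"] * 6,
    "size": [500, 200, 1000, 300, 400, 100],
})

# Each member's share of the visible book.
participation = orders.groupby("member")["size"].sum() / orders["size"].sum()

# Compare against an assumed historical baseline and flag large swings.
baseline = pd.Series({"A": 0.15, "B": 0.40, "C": 0.40})
shift = (participation - baseline).abs()
print(shift[shift > 0.10])  # the 10-point cutoff is arbitrary
```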

"Then we overlay behavior with intent by including information like communications like emails and chats," Bannert-Thurner said. Traders use about a dozen chat tools, including those from Bloomberg, Thomson Reuters and the bank consortium-backed Symphony.

The ability to align what traders say in emails and chats with their actual trades is critical to catching and prosecuting market manipulation.

"You get into jail not for saying, 'Hey, guys, that was well done,' rather because you manipulated the Libor," Bannert-Thurner noted.The trades themselves don't reveal the trader's intent. That understanding is more likely to come through reading their messages to one another.

This has been borne out through several Wall Street scandals. For instance, U.S. regulators investigating Libor rate fixing uncovered many incriminating chats, such as, "COULD WE PLS HAVE A LOW 6MTH FIX TODAY OLD BEAN?" And "LOWER MATE LOWER !!" (These examples were from London traders; most of the New York traders' chats are unprintable.)

Where AI Comes In

It is upon these trader chat threads and emails that Nasdaq is unleashing artificial intelligence software from Digital Reasoning.

The AI software uses natural language processing and machine intelligence to understand the language traders use and identify the key indicators of manipulation.

A trader might say something like, "Let's take this conversation offline," or "I'll call you on my mobile." Such phrases might trigger an alert and a look at trades that were made right after the message was sent.
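Digital Reasoning's software is proprietary, but the basic pairing of a flagged message with the trades that followed it can be sketched in a few lines of Python. The phrase list, field names and 30-minute window below are illustrative assumptions, not details from Nasdaq or Digital Reasoning.

```python
from datetime import timedelta

# Illustrative trigger phrases; real surveillance lexicons are far larger.
TRIGGER_PHRASES = [
    "take this conversation offline",
    "call you on my mobile",
]

def phrase_alerts(messages, trades, window_minutes=30):
    """Pair flagged chat messages with trades the sender made soon after.

    `messages`: dicts with trader, timestamp (datetime), text.
    `trades`:   dicts with trader, timestamp (datetime), plus trade details.
    Field names and the 30-minute window are assumptions for illustration.
    """
    alerts = []
    for msg in messages:
        text = msg["text"].lower()
        if not any(phrase in text for phrase in TRIGGER_PHRASES):
            continue
        window_end = msg["timestamp"] + timedelta(minutes=window_minutes)
        related = [
            t for t in trades
            if t["trader"] == msg["trader"]
            and msg["timestamp"] <= t["timestamp"] <= window_end
        ]
        alerts.append({"message": msg, "related_trades": related})
    return alerts
```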

AI can sometimes understand language humans can't, according to Tim Estes, CEO of Digital Reasoning. "You look at forex trading over Bloomberg chat, it doesn't even look like English," Estes said. "Could a human have caught it? If they were a real expert, yes. But if it was an analyst who didn't know the space well, perhaps not."

Bannert-Thurner noted that even if a trader doesn't use a known phrase like, "I'll call you on my mobile," machine intelligence will pick it up, "because it's close enough and the context is the same. A lot of this hasn't happened before because the technology really wasn't smart enough to come up with high-quality results."
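How Digital Reasoning measures "close enough" is not public. One common way to get a similar effect, shown here purely as a stand-in, is to compare sentence embeddings from an off-the-shelf model; the model name, phrases and 0.7 threshold are assumptions for illustration only.

```python
from sentence_transformers import SentenceTransformer, util

# Off-the-shelf embedding model used here only as a stand-in;
# Digital Reasoning's actual models are not public.
model = SentenceTransformer("all-MiniLM-L6-v2")

KNOWN_PHRASES = [
    "I'll call you on my mobile",
    "let's take this conversation offline",
]
known_vecs = model.encode(KNOWN_PHRASES, convert_to_tensor=True)

def looks_like_known_phrase(message, threshold=0.7):
    """Return (flagged, score): flagged if the message is semantically close
    to a known risky phrase, even without an exact keyword match."""
    vec = model.encode(message, convert_to_tensor=True)
    score = float(util.cos_sim(vec, known_vecs).max())
    return score >= threshold, score

print(looks_like_known_phrase("ring me on my cell instead"))
```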

To teach the artificial intelligence engine, Nasdaq is feeding it alerts and chats from known cases of manipulation and collusion, along with the trades that were made at the time. 
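The training setup has not been disclosed in detail, but in spirit it resembles a standard supervised text classifier. The sketch below, with made-up chat snippets loosely echoing the Libor examples above, shows the general pattern rather than Nasdaq's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up chat snippets standing in for messages pulled from known
# manipulation cases (label 1) and ordinary desk traffic (label 0).
chats = [
    "could we pls have a low 6mth fix today",
    "lower mate lower",
    "nice work on that fix, well done guys",
    "lunch at noon?",
    "please send the updated client report",
    "meeting moved to 3pm",
]
labels = [1, 1, 1, 0, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(chats, labels)

# Score a new message; in a real pipeline the surrounding trades
# would be scored alongside the text, not the words alone.
print(model.predict_proba(["can we get a low fix again today"])[:, 1])
```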

"We think machine learning will make a significant difference in how we detect behaviors, how we help people review and investigate," Bannert-Thurner said.

The goal is to winnow millions of alerts down to small numbers of relevant ones. "We want to serve up great alerts and give all the context that's needed, so you could answer your seven or eight questions around, 'Should I dig deeper or not? Is this worth my time or not?'" Nosal said. " 'Can I protect my firm better by digging deep here or not?' "

In his former job as a compliance officer, "You were always on your own when you crossed that bridge from looking at trades to exploring communications and punching in key words into a system," Nosal said. "It's a forever kind of process. Creating efficiencies to look at this data is what it's all about."

Because artificial intelligence can learn from results and get better at scoring red flags as it goes, it should save time.

"The next time a similar alert comes up, we already know it's likely to be a true or false positive," Bannert-Thurner said. "The ones that are potentially false positives we probably don't need to spend much time on, and we can calibrate our alerts and help the system refine itself."

Estes noted that exchanges and trading firms spend a lot of time doing manual reviews, inspecting emails, listening to calls — "all kinds of stuff that's kind of invasive yet necessary," he said. "When you have millions of emails going through the system every day, you can't afford thousands of people to read those and analyze the ones that are there."

The Digital Reasoning software will automatically screen all the threat indicators before humans see them. "That keeps the false positives down and makes it less invasive in many ways, because then only the things that are well qualified as risks get elevated," Estes said.

This cuts to the heart of the value proposition of AI, Estes said.

"It's not so much replacing what humans do, it's learning from humans and then being able to scale what they do in certain processes," he said. "No machine is going to say 'This person broke the regulation,' not yet. But they might say, 'This looks problematic; human, please look at it.' "

Eventually Nasdaq will include voice conversations in the AI's analysis. "That's at early stages at the moment," Bannert-Thurner said. Same for social media, which is only slowly finding its way into firms.

"You're just not going to see the allowable use of Facebook, Snapchat, Twitter or Instagram, those are not on the desk yet," Nosal said. "But the millennials are coming, and those are the communication channels that are going to be demanded."

Humans will always be involved, Nosal said.

"I sometimes think of it as MI5," he said, referring to the British intelligence agency. "You'll always have the agent who says this looks suspicious or not, I should pursue this or I shouldn't." AI helps the analyst become a more sophisticated investigator.

Editor at Large Penny Crosman welcomes feedback on her posts at penny.crosman@sourcemedia.com.
