Fintech circles are abuzz about the possibilities for artificial intelligence to streamline compliance work at banks in the wake of IBM's deal to buy Promontory Financial Group.
While Big Blue intends to have Promontory's regulatory consultants teach its Watson computer what they know, the industry's compliance officers likely won't be out of a job anytime soon. A lot of AI is still primitive and not what most people would call intelligent at all. But it could provide what David Weiss, a senior analyst at Aite Group, calls "smart assists," in areas "where human beings readily acknowledge we can't do it all, we can't throw enough human beings at the problem."
AI software could help separate false positives from true compliance violations, for instance, by flagging the most urgent money laundering cases. It could help with trade surveillance, by applying natural language understanding to traders' emails and chats, looking for signs of rogue behavior. It could help detect illegal employee behavior in other areas, including opening fake accounts. It could help compliance officers read and parse lengthy regulations. And it could help with regulatory exams and reporting.
Parsing Laws and Regs
One potential task for artificial intelligence in compliance is ingesting lengthy regulatory documents, such as the 3,000-plus-page Dodd-Frank Act, and updates to those regulations. The software can use natural language understanding to pick out the specific rules within them, and send those to the people and departments that need to comply.
"A regulation consists of a lot of text, and contained in that text is all the requirements, the things you have to do or have to not do," said Mike MacDonagh, director of enterprise risk management at Wolters Kluwer. "Pulling those out and working out what they're about and where they apply is a tough job. It's something firms spend a lot of money on. It's a huge challenge as well because the regulations change all the time."
Artificial intelligence software could start by finding words that imply requirement, such as "must" or "shall." Then it could identify the entities involved: "the firm must" or "the regulator will." Software could figure out the product or process affected, such as swaps, mortgages, client origination or anti-money-laundering compliance.
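As a rough illustration of that keyword-and-entity approach, a few lines of Python can pick out obligation sentences and tag the parties they name. The keyword lists and the `extract_requirements` helper here are hypothetical, not anything IBM or Wolters Kluwer has described:

```python
import re

# Simplified sketch: split a regulation into sentences, keep those containing
# obligation keywords, and tag any obligated entity the sentence mentions.
OBLIGATION_WORDS = re.compile(r"\b(must|shall|is required to)\b", re.IGNORECASE)
ENTITY_WORDS = ["the firm", "the regulator", "the bank"]  # illustrative list

def extract_requirements(text):
    requirements = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if OBLIGATION_WORDS.search(sentence):
            entities = [e for e in ENTITY_WORDS if e in sentence.lower()]
            requirements.append({"text": sentence, "entities": entities})
    return requirements

sample = ("The firm must report swap positions daily. "
          "This section provides background. "
          "The regulator shall publish updates quarterly.")
for req in extract_requirements(sample):
    print(req["entities"], "->", req["text"])
```

A production system would need real sentence segmentation and far richer entity and topic models, but even this crude filter shows how obligation sentences can be separated from background text before routing.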
"If you can pull those out and tag them, then you can automatically or with very little help send those to the people in the organization who are likely to be interested in them," MacDonagh said. A new rule about client origination in asset management would go to the asset management team, for instance.
"You still need a person who can say that's right or wrong," he said. "But the work in terms of identifying and sending is hugely diminished and I think this is the basis of what IBM and Promontory are talking about. We do some of that already, mostly manually, but we've started to use simple natural language processing. What they add is scalability."
Regulators are already starting to apply automation to the way they deliver their rules, MacDonagh noted. They're looking to build common vocabularies, ontologies, and schemas for financial regulation. One group in Ireland, the Governance Risk and Compliance Technology Center, has several university professors working on this. The Consumer Financial Protection Bureau already makes some of its regulatory information available in XML format.
When regulators have the common language they need to produce XML-formatted regulations or updates that systems can understand, there will be less need to have Watson or consultants interpret it. However, such initiatives always take a long time. The EDM Council and the Object Management Group, for instance, have been working on their version of a common vocabulary for the financial industry, the Financial Industry Business Ontology, for more than a decade.
AI could also be used to assist regulatory examinations and reporting. "When an examiner comes in, effectively they're asking a series of questions," MacDonagh pointed out. Artificial intelligence could be used to create an environment where regulators could ask their questions of the computer.
Catching Money Launderers
One large North American bank is considering the use of AI in anti-money-laundering work.
"There are so many manual processes in AML," said a compliance executive at the bank, who asked that neither she nor her institution be identified. "People are taking pictures of documents all day long and grabbing information from different systems. A robot could alleviate this drudgery."
The bots could act like personal digital assistants, collecting all the documents and data needed. "And then humans can do their human job – analyzing the information, finding red flags," the executive said.
However, she said she doesn't see jobs being cut in favor of the bots, because AML workloads are increasing due to changes in regulation. "It's more of a way to absorb the increasing workload," she said.
David McLaughlin, CEO of QuantaVerse, whose AI software is used by large banks and card issuers to improve their AML processes, estimates about 75% of the work human investigators do on money laundering cases could be automated.
"I don't think that means a reduction in head count; it could mean existing investigators focusing on the 10% of flags that aren't false positives and need to be more thoroughly investigated," he said.
Software could also help with the recurring problem of overwhelming numbers of false positives.
"Investigators are under tremendous pressure to complete an investigation and either move it along or close it down," McLaughlin said. "There's always the question of those false positives, of whether the investigator has missed something. Is money laundering getting through the cracks of a human investigation process?"
If software can handle the bulk of the data gathering and pass a rudimentary judgment on whether or not a case is worth pursuing – an automated grand jury of sorts – human investigators can focus on the complex cases.
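A minimal sketch of that triage idea, with made-up feature names and thresholds rather than any vendor's actual model:

```python
# Hypothetical triage sketch: score AML alerts on a few simple features and
# route only the higher-risk ones to a human investigator. The features,
# weights, and threshold are illustrative assumptions.
def risk_score(alert):
    score = 0
    if alert.get("amount", 0) > 10_000:   # large transaction
        score += 2
    if alert.get("high_risk_country"):    # counterparty jurisdiction
        score += 2
    if alert.get("structuring_pattern"):  # many just-under-threshold deposits
        score += 3
    return score

def triage(alerts, threshold=3):
    """Split alerts into cases for humans vs. likely false positives."""
    escalate = [a for a in alerts if risk_score(a) >= threshold]
    auto_close = [a for a in alerts if risk_score(a) < threshold]
    return escalate, auto_close

alerts = [
    {"id": 1, "amount": 500},
    {"id": 2, "amount": 15_000, "high_risk_country": True},
    {"id": 3, "amount": 9_900, "structuring_pattern": True},
]
escalate, auto_close = triage(alerts)
```

Real systems would use learned models rather than hand-set weights, but the division of labor is the same: the machine clears the routine alerts, and the "grand jury" output goes to people.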
Similarly, artificial intelligence's ability to analyze vast amounts of data for patterns, and to scrape the dark web and the deep web, can be applied to know-your-customer processes, McLaughlin said.
What software can't do is make decisions in gray areas.
"If a human can see a pattern of behavior that's identified as money laundering, the computer can easily replicate that to say 'yes, that's money laundering,'" McLaughlin said.
But when it comes to creating a suspicious activity report, human judgment is needed. "I don't think it's culturally possible yet to offload that decision making about accusing somebody of committing a crime, which is basically what a SAR does, to a computer," he said.
Catching Rogue Employees
Credit Suisse and the Silicon Valley sensation Palantir are developing AI software that monitors employees to try to catch rogue behavior. Their joint venture, called Signac, is initially focused on detecting unauthorized trading. Over time its technology will watch all employee behavior and try to catch breaches of conduct rules for Credit Suisse and potentially other banks.
Wolters Kluwer and other vendors already offer such Big Brother software, MacDonagh said. "Some are using AI and they're able to learn as they go," he said. "That's the key, because machine learning gives you the ability to get better at it. The fewer false positives you have, the better your ultimate solution will be. The things that are wrong will stand out better."
AI software could catch employees opening fake accounts by looking for multiple accounts opened with the same email address, for example. Or network traffic could be analyzed to see whether several accounts have been opened from the same IP address. Fuzzy logic and anomaly detection could be used to identify email account names that are likely to be fake or fraudulent, such as firstname.lastname@example.org or email@example.com.
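The duplicate-email and shared-IP checks described above can be sketched in a few lines of Python; the account fields and the `flag_duplicates` helper are illustrative assumptions, not a bank's actual schema:

```python
from collections import Counter

# Flag accounts that share an email address or an originating IP address
# with other accounts -- the simple signals the article describes.
def flag_duplicates(accounts):
    email_counts = Counter(a["email"] for a in accounts)
    ip_counts = Counter(a["ip"] for a in accounts)
    flagged = []
    for a in accounts:
        reasons = []
        if email_counts[a["email"]] > 1:
            reasons.append("shared email")
        if ip_counts[a["ip"]] > 1:
            reasons.append("shared IP")
        if reasons:
            flagged.append((a["id"], reasons))
    return flagged

accounts = [
    {"id": "A1", "email": "x@bank.com", "ip": "10.0.0.5"},
    {"id": "A2", "email": "x@bank.com", "ip": "10.0.0.9"},
    {"id": "A3", "email": "y@bank.com", "ip": "10.0.0.5"},
    {"id": "A4", "email": "z@bank.com", "ip": "10.0.0.7"},
]
flagged = flag_duplicates(accounts)
```

Fuzzy matching of near-identical addresses and statistical anomaly detection would sit on top of exact-match checks like these.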
Wells Fargo's recent fake-accounts scandal is a case in point. "The use of artificial intelligence could easily be applied to the problem that Wells [Fargo] found themselves in," McLaughlin said. "The techniques of finding anomalous patterns of behavior would be perfectly applicable to a case of employee fraud."
It's easier to find than many other types of fraud, he noted, because it requires only bank data; there's no need to pull in open-source, social media or deep web data to support the investigation.
Natural language processing could be used to extract information from calls and email complaints from customers who didn't know they had accounts or who had money transferred from one account to another without their knowledge.
"If management had a clue and was asking the right questions, and good data science was put in place, they could receive indications or flags on their desktop to say there's something going on here: our number of bad email accounts and customer complaints on unknown accounts is through the roof," McLaughlin said.
There may be cases where people simply fill out a form incorrectly or mistype their email address. It's also possible for employees to fake email addresses for customers, to enable them to open an account. But when fraudulent email addresses appear on dormant accounts, there's a strong chance something is wrong.
What Will Regulators Say?
Bank regulators are thinking about the implications of AI and meeting with AI vendors.
"We're in beginning conversations with a couple of the regulators now," McLaughlin said. "They have not come out and opined one way or the other. From conversations I've had, the mood is one of serious interest about it."
Banks, he said, worry that a regulator could question whether an AI-using bank that employs fewer investigators or submits fewer SARs than its peers is doing the job properly. Another challenge is that regulators have gotten stricter about vendor due diligence and privacy. QuantaVerse has been going through a detailed and lengthy vendor due diligence and data privacy examination from a large bank.
"But the regulator conversations are increasingly interested in the ability of technologies to further fight financial crimes and, downstream, the financing of terrorism," McLaughlin said. "They're very open to this."
Editor at Large Penny Crosman welcomes feedback at firstname.lastname@example.org.