Big Brother has its eye on bank employees

It sounds like a good idea — at least from a risk management perspective.

Large banks have begun using IBM’s Watson artificial intelligence software to monitor employees for signs of misconduct.

Between six and 12 large banks are piloting the use of Watson for surveillance and deeper analysis of consumer complaints, according to Marc Andrews, vice president of Watson Financial Services Solutions.

“We started off applying this technology toward regulated employees like traders, and identifying behaviors and other patterns that may be indicative of potential misconduct,” Andrews said. “Since then we’ve expanded to look at other types of conduct issues beyond trader communications.”

But high-tech snooping efforts face technical, legal and social limits. A recent move by some London financial institutions to install motion sensors to monitor employees' desk activity raised eyebrows. Moreover, workplace privacy rules require full disclosure in many instances, complicating banks' chances of finding bad actors, such as middle managers applying undue sales pressure on branch staff.

Watson is looking for the kinds of warning signs that could have uncovered problems at Wells Fargo, where employees opened as many as 3.5 million fake checking and credit card accounts and wrongfully repossessed 200,000 cars.

It is on the hunt for tipoffs like the suppression of account notifications, mismatched contact information and suspicious login times. And it is listening to and reading customer complaints that arrive through call centers, text messages and emails to help banks identify serious risks. It is learning to distinguish operational failures, like ATM or website outages, from potential misconduct, like steps taken to disguise inappropriate account openings.
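The article does not spell out how those flags are computed, but each one lends itself to a simple rule. Below is a minimal, hypothetical Python sketch of that kind of check; the field names, the example data and the 8 a.m.-to-8 p.m. window are invented for illustration, not drawn from IBM's system.

```python
# A minimal, hypothetical sketch of rule-based flagging for the warning
# signs described above. The record fields and thresholds are invented
# for illustration; IBM has not published Watson's actual rules.
from datetime import time

def flag_account_event(event):
    """Return the warning-sign labels raised by one account event."""
    flags = []

    # Suppressed notifications: statements or alerts turned off at opening.
    if event.get("notifications_suppressed"):
        flags.append("notifications_suppressed")

    # Mismatched contact information: the email on the new account differs
    # from the customer's profile of record.
    if event.get("account_email") != event.get("profile_email"):
        flags.append("contact_mismatch")

    # Suspicious login time: activity well outside normal branch hours.
    login = event.get("login_time")
    if login and not (time(8, 0) <= login <= time(20, 0)):
        flags.append("off_hours_login")

    return flags

# Example: an account opened at 11:45 p.m. with alerts disabled and a
# branch email address substituted for the customer's own.
event = {
    "notifications_suppressed": True,
    "account_email": "branch-desk@example.com",
    "profile_email": "customer@example.com",
    "login_time": time(23, 45),
}
print(flag_account_event(event))
# -> ['notifications_suppressed', 'contact_mismatch', 'off_hours_login']
```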

At a recent conference, chief compliance officers at Canadian banks said they’re now under increased pressure to detect potential conduct issues because of publicized cases of insider trading, money laundering and fraud.

“The banks don’t want their competitors to get caught with things because when they do, it generates increased scrutiny for everyone,” Andrews said. “They said it’s not an issue of whether or not it’s within their culture or whether they have a bad culture; the bigger issue is any bank, no matter how strong its culture, is going to have one or two bad ducks.”

The trouble is, many banks have more than one or two bad ducks. There are systemic and cultural problems, such as outsize sales expectations set by top- and middle-management executives that force staff far down the corporate ladder to scramble to meet their quotas.

An AI engine should be able to find problems like unauthorized account openings. But can it find the source of the misconduct?

Watson is trained to read emails looking for tone. It can spot anger, sadness, happiness, over-aggressiveness or warning signs.

“We could identify if there are high-pressure emails being sent out to the salesforce,” Andrews said.
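IBM's since-retired Tone Analyzer service exposed exactly this kind of document-level scoring. As a rough illustration only, here is how such a call might look through the ibm-watson Python SDK; the API key and service URL are placeholders, and nothing here is confirmed to be the banks' actual pipeline.

```python
# Illustrative only: scoring an email's tone with IBM's (since-retired)
# Tone Analyzer service via the ibm-watson Python SDK. The API key and
# service URL are placeholders, not the banks' actual setup.
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")
tone_analyzer = ToneAnalyzerV3(version="2017-09-21", authenticator=authenticator)
tone_analyzer.set_service_url(
    "https://api.us-south.tone-analyzer.watson.cloud.ibm.com"
)

email_body = (
    "These numbers are unacceptable. Hit your quota by Friday or we "
    "will be having a very different conversation."
)

result = tone_analyzer.tone(
    {"text": email_body}, content_type="application/json"
).get_result()

# Each detected tone (anger, sadness, joy and so on) comes back with a
# 0-to-1 confidence score; a monitoring pipeline might flag messages
# whose anger score crosses a threshold as possible high-pressure emails.
for tone in result["document_tone"]["tones"]:
    print(tone["tone_name"], round(tone["score"], 2))
```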

But even if the technical ability to read and understand all employee communications exists, the legal ground is shaky.

Regulated employees — such as traders, whose communications have to be monitored according to securities laws, and CEOs, whose calls and messages can be watched under the Sarbanes-Oxley Act — are fair game for surveillance tools.

But unregulated employees are protected by data privacy regulations.

“If a person is not in a regulated sector, [companies] would need employee consent to be globally monitoring what an employee is doing,” said Christine Duhaime, managing partner at Duhaime Law in Vancouver.

Some rules require companies to obtain consent in the terms of employment, when the employee agrees to take the job.

But consent to be monitored cannot simply be buried in the fine print, she said.

“That’s the whole issue with informed consent,” Duhaime said. A company needs to explain the reasons it’s monitoring the employee and the types of data it’s collecting, as well as how that information will be used and shared.

“If the employee understands all that and gives consent, fair game,” she said. “But it can’t be buried in a five-page employment contract where you don’t understand what you’re signing.”

The rules are different for legitimate investigations of suspicious activities.

“In a legal process they can get any evidence they want related to company emails,” Duhaime said. “But you can’t, in anticipation of being sued, breach privacy law and collect information you’re not authorized to collect for a potential investigation down the line.”

If a company suspects an employee of wrongdoing, it has the right to examine emails.

“As long as the company has given employees advanced warning that if it suspects wrongdoing or if the employee is engaged in wrongdoing, it can monitor their computer activity,” Duhaime said. “That is fine because it’s probably in the policies and procedures of the company. But if it’s not, and there’s no legitimate investigation going on, they can’t go on a global fishing expedition to monitor people without consent.”

This is why Watson is sorting through complaints.

Compliance experts from Promontory Financial Group, the consulting firm IBM bought last year, have been teaching Watson to understand complaints. Subject matter experts read through the complaints and annotate them to identify themes.

For instance, if a mortgage borrower called into a bank and used the word “discrimination” in the conversation, that would be simple for a traditional compliance system to pick up. But if someone said they felt they were denied because of age, race or gender, software might not understand. A Promontory consultant, however, could easily label the complaint as indicative of a discrimination claim.
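That annotate-then-train loop is standard supervised text classification. Here is a minimal sketch of the idea using scikit-learn, with invented complaints standing in for the expert-labeled data; Promontory's annotation scheme and Watson's models are not public.

```python
# A minimal sketch of the annotate-then-train workflow using scikit-learn.
# The labeled complaints are invented stand-ins; a real system would train
# on thousands of expert-annotated examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Expert annotations: 1 = possible discrimination claim, 0 = operational issue.
complaints = [
    "I believe I was denied this mortgage because of my age.",
    "They approved my neighbor but not me, and I think it's my race.",
    "The ATM at the Main Street branch ate my card again.",
    "Your website was down all weekend and I couldn't pay my bill.",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(complaints, labels)

# A new complaint that never uses the word "discrimination" can still
# score as a likely discrimination claim once annotated examples have
# taught the model the surrounding language.
new_complaint = "I feel I was turned down for the loan because I'm a woman."
print(model.predict_proba([new_complaint])[0][1])
```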

Watson is also looking at performance and other HR-related data.

“There might be some salespeople who were middle or bottom of the pack in performance and all of a sudden have a jump in performance and become one of the top sellers,” Andrews said.

It searches for high concentrations of business done in certain customer segments.

“Does a certain person seem to be doing a bunch of business with people over 75?” he said. “That might be an indicator of elder abuse or elder fraud.”
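Both of those signals reduce to simple statistics on sales and HR data. A hypothetical sketch, with invented numbers and thresholds:

```python
# A sketch of the two HR-data signals Andrews describes: a sudden jump in
# a salesperson's results and a heavy concentration of business in one
# customer segment. Data and thresholds are hypothetical.
from statistics import mean, stdev

def performance_jump(quarterly_sales, z_threshold=2.0):
    """Flag if the latest quarter is an outlier versus the seller's history."""
    history, latest = quarterly_sales[:-1], quarterly_sales[-1]
    spread = stdev(history)
    if spread == 0:
        return latest > history[-1]
    return (latest - mean(history)) / spread > z_threshold

def elder_concentration(customer_ages, share_threshold=0.5):
    """Flag if more than half of a seller's customers are over 75."""
    over_75 = sum(1 for age in customer_ages if age > 75)
    return over_75 / len(customer_ages) > share_threshold

# A middle-of-the-pack seller who suddenly tops the league table...
print(performance_jump([40, 38, 42, 41, 95]))         # True
# ...doing most of that business with customers over 75.
print(elder_concentration([81, 79, 88, 45, 77, 82]))  # True
```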

Another challenge with AI-based employee surveillance is that companies often know what to look for only after a scandal has broken. The same dynamic plays out in cybersecurity: once a signature or pattern of bad behavior becomes known and systems are programmed to detect it, criminals change their tactics to something those systems won't recognize.

Andrews said Watson can expose suspicious behaviors and communications before they become major issues. “These things don’t happen overnight,” he said.

At Wells Fargo, Jonathan Velline, executive vice president, said the main technology the bank is using to combat misconduct risk is authentication.

“The more we can be certain the customer was present in the branch, agreed to the terms and conditions of products or services we opened for them, the more certain and comfortable the customer feels and we feel as an organization,” he said. “We’re embedding in all our workflows customer authentication using either a card or smartphone. That allows the customer to register their acceptance of terms and conditions and consent for these services.”

Editor at Large Penny Crosman welcomes feedback at penny.crosman@sourcemedia.com.
