Can AI chat with customers like a teller — and should it?

Tech companies have been working for years to train their virtual assistants to converse like humans. There are hundreds of so-called chatbots out there today, with a few stars like Apple's Siri and Amazon's Alexa.

In their efforts to be chatty and have personality, these programs at times misfire in embarrassing ways. Blame it, in part, on their immaturity.

Microsoft's AI-powered chatbot Tay was set up to respond to tweets and chats within the GroupMe and Kik messaging apps. Within a day, though, the chatbot started repeating misogynistic and racist remarks made by users. Microsoft quickly took Tay offline and is reprogramming it.

Many of the early Facebook Messenger bots got terrible reviews for being unable to understand and perform basic tasks. For instance, when a user asked CNN's chatbot to unsubscribe several times, wording the request slightly differently each time, the bot responded with the same cute emoticon and the same reply: "Try again? Use a few words to tell me more about what you want to know about."


"There are a lot of what we call low-IQ bots out there," said Zor Gorelov, CEO of virtual assistant software maker Kasisto. "Companies are racing to do chatbots and I think they're risking real damage to brand equity. Someone said bots are like apps 10 years ago. This is not true. Apps ten years ago you could use."


Of course, the early days of any technology are fraught with glitches, and a little patience is always needed during the development and improvement process.

But for financial institutions, which stake their business on being trustworthy and reliable, there's a certain amount of risk to putting a chatbot out there that could make embarrassing or serious gaffes. The banks that are experimenting with chatbots, which include Bank of America, Capital One, and Societe Generale, are sorting this out.

Societe Generale's Approach

Paris-based Societe Generale plans to roll out a chatbot based on Personetics technology later this year at BRD Groupe Societe Generale, its Romanian banking unit. At first, it will only answer questions for investors in its equity funds. If all goes well, the chatbot will be extended to retail products like deposit accounts and bill payment, and it will be rolled out to other territories.

The bank is trying to focus the chatbot on a distinct set of about 15 topics, yet give it some flexibility in what the client can say and how the chatbot will respond.

"If we don't change the behavior of the bot, if we don't add new answers, the client will probably get bored," said Horia Velicu, head of the innovation lab at BRD Group. "So I want to improve it in time and in terms of personality. I'm trying to be somewhere in the middle — not very casual, because we are a bank, but also but also not dead serious. It's a challenge."

The bank has put a lot of thought into the kinds of questions the chatbot will answer.

"We did internally a lot of customer journey exercises to imagine all kinds of questions and pain points," Velicu said. "We aim to handle everything regarding the investment fund experience, starting from subscription, redemption, seeing performance, adding to the position, getting out, receiving alerts and so on."

The bank has already discovered a benefit to the robotic nature of the chatbot: it can walk a potential customer through compliance questions and disclosures more carefully than a human can.

"In terms of operational risk, it's safer for us," Velicu said. "We didn't think of that initially for this, that is a byproduct."

Societe Generale's chatbot is being trained to get things done rather than make small talk. Its next step will most likely be to pay an invoice or perform a funds transfer.

"Initially I don't want to bill this as customer care, I want to have it centered around product actions," Velicu said.

To this end, the Personetics software is being integrated with the bank's operations software to execute these requests. For Velicu, part of the appeal of having a chatbot handle tasks for customers is that new ones can be added quickly. To make an app do new things, new screens and buttons have to be designed. To make a chatbot handle something new, it just needs to be made more intelligent.
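A minimal sketch of the idea Velicu describes, under stated assumptions: if the chatbot's tasks are wired up as intent handlers that call the bank's back-office systems, supporting a new product action is mostly a matter of registering another handler rather than designing new screens. The intent names, slot fields and the core-banking stubs below are hypothetical illustrations, not the Personetics or BRD interface.

```python
# Hypothetical sketch: adding a new chatbot task by registering an intent
# handler against a back-office API, instead of building a new app screen.
from typing import Callable, Dict

# Stand-ins for the bank's operations software (illustrative only).
def core_banking_transfer(customer_id: str, to_account: str, amount: float) -> str:
    return f"Transferred {amount:.2f} to {to_account} for customer {customer_id}"

def core_banking_pay_invoice(customer_id: str, invoice_id: str) -> str:
    return f"Paid invoice {invoice_id} for customer {customer_id}"

# Registry mapping recognized intents to the code that executes them.
HANDLERS: Dict[str, Callable[[str, dict], str]] = {}

def register(intent: str):
    def wrap(fn: Callable[[str, dict], str]) -> Callable[[str, dict], str]:
        HANDLERS[intent] = fn
        return fn
    return wrap

@register("transfer_funds")
def handle_transfer(customer_id: str, slots: dict) -> str:
    return core_banking_transfer(customer_id, slots["to_account"], slots["amount"])

@register("pay_invoice")
def handle_invoice(customer_id: str, slots: dict) -> str:
    return core_banking_pay_invoice(customer_id, slots["invoice_id"])

def dispatch(intent: str, customer_id: str, slots: dict) -> str:
    handler = HANDLERS.get(intent)
    if handler is None:
        # Unknown requests fall back to a human rather than guessing.
        return "Sorry, I can't do that yet. Let me connect you with a colleague."
    return handler(customer_id, slots)

if __name__ == "__main__":
    print(dispatch("pay_invoice", "cust-42", {"invoice_id": "INV-2017-001"}))
    print(dispatch("transfer_funds", "cust-42", {"to_account": "RO49AAAA1B31", "amount": 250.0}))
```

In a design like this, "making the chatbot more intelligent" means training the language-understanding layer to recognize one more intent and adding one more handler; the app's screens never change.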

Another advantage is that the chatbot will gather conversational data from clients.

"To have data about what the client really asks and what he's interested in is incredibly valuable, we will use this," Velicu said.

Velicu believes there are certain questions and problems that should always be handled by humans.

"The human aspect will remain important because sometimes you want reassurance," he said. "This is difficult for a bot to handle. When a client needs to know something is real, or he is not sure and has to double-check, this is difficult."

Sales of complicated products, like equity derivatives, are also probably best left to humans.

"For some niche products, it helps to have a human to explain the intricacies and to give this trust," Velicu said. "This will never disappear, I think. I don't see a person completely trusting the robot."

However, in a recent study, Accenture found that for some types of financial advice (about bank accounts and investments, for example) consumers trust guidance from an artificial intelligence engine more than a human.

"If they're going to be cross-sold or upsold something, they would trust the recommendation from an AI engine more than they would trust a person telling them that," said Stephanie Sadowski, managing director at Accenture. "Even with demographics that previously were more trusting of human guidance, now people question what's in it for the person selling this to me, are they really doing this in my best interest, whereas with AI, it takes that emotion or any incentives out."

It will come down to execution — if companies implement this the wrong way and their chatbots push out offers that are neither relevant nor personalized, the credibility of AI engines will go down, Sadowski added.

How Human Should It Be?

Personetics, which says it works with four of the top 10 banks in North America, has tried to make very clear the bot is just a bot.

"We don't believe in pretending these interactions are human," said Eran Livneh, head of marketing at Personetics. "We think it's always good to have the customer understand they're not interacting with a human, and keep human interaction as a secondary option."

The company also recommends a conservative approach to chatbot responses, to avert the danger of the chatbot saying something weird or wrong.

"The risk in providing a bad answer is much higher than saying I don't know, or you need to speak with a person," Livneh said. "We have to have a high confidence in the appropriateness of the response to deliver it to the customers."

At the same time, he argues the technology can provide responses that are as good or better than what humans can provide. One financial institution Personetics is working with is using the technology to replace live agents for certain tasks they do today, to free them up for more complex work.

The best role model for financial chatbots may be USAA's virtual assistant, which started out with a team of people composing answers to members' questions and logging them in answer libraries. Now, when a customer poses a question, the assistant searches those libraries. If an answer exists, it returns it. If the customer has requested a simple action, such as blocking or unblocking a card, the virtual assistant can do it. If the assistant can't find an answer in the libraries, it routes the customer to the help desk.
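In code, that flow amounts to a three-way branch: look up the question in the answer library, handle a short whitelist of simple actions, and escalate everything else to the help desk. The library entries and action names below are made up for illustration; they are not USAA's actual content.

```python
# Sketch of the routing logic described for USAA's virtual assistant:
# 1) return a curated library answer if one exists,
# 2) perform a whitelisted simple action (e.g., block/unblock a card),
# 3) otherwise route the member to the help desk.
ANSWER_LIBRARY = {
    "what is my routing number": "Your routing number is shown on the Accounts page under account details.",
    "how do i order checks": "You can order checks from the account services menu in the mobile app.",
}

SIMPLE_ACTIONS = {
    "block my card": lambda member_id: f"Card for member {member_id} has been blocked.",
    "unblock my card": lambda member_id: f"Card for member {member_id} has been unblocked.",
}

def route(question: str, member_id: str) -> str:
    key = question.strip().lower().rstrip("?")
    if key in ANSWER_LIBRARY:        # 1) curated, human-written answer
        return ANSWER_LIBRARY[key]
    if key in SIMPLE_ACTIONS:        # 2) simple, well-defined action
        return SIMPLE_ACTIONS[key](member_id)
    # 3) escalate anything the assistant can't handle
    return "Let me connect you with the help desk for that."

if __name__ == "__main__":
    print(route("How do I order checks?", "m-123"))
    print(route("Block my card", "m-123"))
    print(route("Why was my mortgage application delayed?", "m-123"))
```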

Despite the risk that chatbots will make mistakes, banks can't afford to sit back and do nothing. Other players, perhaps Alexa and Siri themselves, may one day direct customers to financial institutions that offer the convenience and around-the-clock availability of chatbots.

Editor at Large Penny Crosman welcomes feedback at penny.crosman@sourcemedia.com.
