BankThink

It takes more than developers to make AI pay off

In a recent American Banker op-ed, Synchrony CEO Margaret Keane made the bold statement that “AI innovation must be for all.” Keane argues that companies have both a business and moral imperative to train existing employees to work with emerging artificial intelligence technologies.

She is right — and not simply because it is the right thing to do. Corporatewide training is the only way to ensure the success of these new technologies.

AI has the potential to bring efficiency and precision to bank operations, like marketing and fraud detection, while its customer-facing uses could help extend credit and new financial services to more consumers. But at the end of the day, it won’t be just data scientists who are using AI — it will be the sales teams, the fraud analysts, the relationship managers and the contact center employees.

Yes, we will still have contact centers. AI is being used in bank contact centers to filter calls for efficiency and security, but contact center employees must still resolve complex questions and complaints that the AI cannot. And if the employee finally picking up the phone on the other end of the line doesn’t understand how the technology is supposed to filter or screen, how will the company know if the AI is even operating correctly? Who is monitoring these tools when the AI developers are too busy developing?

No single employee can be the best programmer, hold a doctorate in mathematics, manage compliance risk and understand all the nuances of the first line of business. Such superheroes do not exist, at least not in the numbers needed to staff a corporation. Companies must cultivate that breadth instead by assembling teams with diverse skill sets and cross-training associates in data analytics.

What does this look like?

The fraud landscape is a great example of an area in banking that will benefit immensely from emerging AI technologies combined with analytics cross-training. Fraud touches multiple lines of business from sales and customer support to technology development and risk management. As banks expand into the digital arena with products like online loan origination and person-to-person payment platforms, AI solutions that detect synthetic identities and protect against cyberattacks will help companies stay one step ahead of the fraudsters.

While AI today cannot duplicate the more abstract intelligence of the human mind, when applied to narrow tasks like flagging transactions for potential fraud, it can quickly discern complex patterns within certain types of data. However, developing AI models amid the rise of big data and computational power introduces emerging risks involving transparency, stability, fairness and performance, along with the risks of replacing human-directed decision-making.
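To make that narrow pattern-recognition point concrete, below is a minimal sketch of the kind of supervised fraud-flagging model a bank's data science team might train. The feature names, synthetic data and scikit-learn pipeline are illustrative assumptions, not a description of any bank's actual system.

```python
# Illustrative sketch only: a simple supervised fraud-flagging model.
# Features and data are hypothetical, not any bank's real pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
n = 5000

# Hypothetical transaction features: amount, hour of day, merchant risk
# score, and distance from the cardholder's home (all synthetic).
X = np.column_stack([
    rng.lognormal(3.0, 1.0, n),      # transaction amount
    rng.integers(0, 24, n),          # hour of day
    rng.random(n),                   # merchant risk score
    rng.exponential(10.0, n),        # distance from home, in miles
])
# Synthetic labels: fraud is rare and loosely tied to risk score and
# distance, so the model has a real (if artificial) pattern to learn.
y = (rng.random(n) < 0.02 + 0.05 * X[:, 2] * (X[:, 3] > 30)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
flags = model.predict(X_test)

print("precision:", precision_score(y_test, flags, zero_division=0))
print("recall:   ", recall_score(y_test, flags, zero_division=0))
```

Even a toy model like this produces output (flags, precision, recall) that fraud analysts, not just the developers who built it, need to be able to read and question.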

This is where analytics cross-training becomes paramount. The successful deployment of any new AI technology, whether used for fraud detection or customer interaction, requires the coordination of model development, model risk management, compliance risk, operational risk, technology and ultimately the line of business that will be using the model. These individuals need to be working in concert at the outset of model development, and any employees using or interacting with the tool should be equipped with the necessary analytics skills to successfully monitor its output for both accuracy and fairness.
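As a hedged sketch of what that ongoing monitoring could look like to a line-of-business analyst, the check below compares the model's precision against a floor and its flag rates across customer segments. The column names and alert thresholds are assumptions for illustration, not an industry standard.

```python
# Illustrative monitoring check a non-developer analyst might run weekly.
# Column names and alert thresholds are hypothetical assumptions.
import pandas as pd

def monitor(decisions: pd.DataFrame,
            max_disparity: float = 1.25,
            min_precision: float = 0.5) -> list[str]:
    """Flag accuracy or fairness problems in a batch of model decisions.

    `decisions` needs three columns: `flagged` (model output, 0/1),
    `confirmed_fraud` (later-verified ground truth, 0/1) and
    `segment` (e.g., a customer demographic group).
    """
    alerts = []

    # Accuracy: of the transactions the model flagged, how many were
    # later confirmed as fraud?
    flagged = decisions[decisions["flagged"] == 1]
    if len(flagged) > 0:
        precision = flagged["confirmed_fraud"].mean()
        if precision < min_precision:
            alerts.append(f"precision dropped to {precision:.2f}")

    # Fairness: do flag rates diverge too far across customer segments?
    rates = decisions.groupby("segment")["flagged"].mean()
    if rates.min() > 0 and rates.max() / rates.min() > max_disparity:
        alerts.append(
            f"flag-rate disparity of {rates.max() / rates.min():.2f}x "
            f"between segments {rates.idxmin()} and {rates.idxmax()}")
    return alerts
```

Running a check like this takes analytics literacy rather than a doctorate, which is precisely the skill set cross-training is meant to build.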

In a November speech, Federal Reserve Gov. Lael Brainard cited a recent example of poor data governance that led to unintentional bias against women in an AI hiring model. Amazon's developers unwittingly trained the machine-learning model on a decade's worth of resumes from past successful (and predominantly male) hires, teaching the program to favor male candidates despite the developers' attempts to neutralize gender in the dataset. Fortunately, close monitoring of the program's suggested hires and rejections uncovered the bias, leading Amazon to re-evaluate its hiring methodology.

This case study underscores the importance of executing sound data governance and defining an ongoing monitoring plan at the outset of any machine-learning development. Amazon's replacement of human-aided decision-making came with inherent model risk, which in the case of a bank might include violations of fair-lending laws. These regulations prohibit discrimination against protected classes, including but not limited to race, national origin, gender and age. The requirements of fair lending, combined with emerging AI risks, necessitate guardrails for AI models that are more extensive than those applied to traditional modeling techniques.

One of these guardrails involves educating model users to understand how data is used to develop AI models, especially since the model output itself is often used to retrain the model to adapt to, for example, a changing fraud landscape. From a compliance perspective, the line of business should understand what variables are used in the AI model and how they might potentially correlate with protected class variables, like gender in the case of the Amazon hiring model.
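One hedged illustration of that compliance review is a simple proxy screen that measures how strongly each model input correlates with a protected attribute. The function, threshold and column names below are hypothetical.

```python
# Illustrative proxy screen: how strongly does each model input correlate
# with a protected attribute? Column names are hypothetical assumptions.
import pandas as pd

def proxy_screen(features: pd.DataFrame,
                 protected: pd.Series,
                 threshold: float = 0.3) -> pd.Series:
    """Return the features whose correlation with the protected attribute
    (e.g., gender encoded as 0/1) exceeds `threshold`, for human review."""
    corr = features.corrwith(protected).abs().sort_values(ascending=False)
    return corr[corr > threshold]

# Example: a hypothetical 'years_in_role' input might quietly proxy for
# age; the screen surfaces it so the line of business and compliance can
# review it rather than discover the bias after deployment.
```

A linear correlation screen will miss nonlinear proxies, but it gives the line of business a concrete artifact to review with compliance and model risk management.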

So, how can corporations best facilitate this type of cross-collaboration among developers, risk management and line of business to ensure AI success?

An internal analytics training platform is one method that is both cost-effective and quick to produce results. Corporations already spend thousands of dollars each year sending a mere handful of employees to AI-related conferences. But they can multiply existing resources by creating a stage for the company's top data scientists to educate other employees through monthly "lunch and learns," a 12-week data science course that meets weekly or an internal rotational program. For little to no additional cost, data science courses through online learning platforms like Coursera, DataCamp and edX can be paired with these internal training efforts to enhance learning and facilitate conversation.

Conversation across business units is one of the ancillary benefits of analytics cross-training, and it is also critical to AI success. When employees network, they surface information that can save a company time and resources: a duplicate technology that could be decommissioned, an AI tool that could be better leveraged to serve additional business needs, or simply a colleague with the expertise to take a project to completion.

Regular analytics training, collaboration and conversation need to be part of the company's culture. The success of an internal training platform depends on executive sponsorship and on management's recognition of the individuals who set aside time to organize, teach and participate in the classes. Instead of perpetuating corporate silos, which create rigid barriers to new technology adoption, companies should reward and encourage employees who share new knowledge, tools and processes.
