BankThink

Don't underestimate AI's risks

Artificial intelligence technologies have already begun to transform financial services. At the end of 2017, 52% of banks reported making substantial investments in AI and 66% said they planned to do so by the end of 2020. The stakes are enormous — one study found that banks that invest in AI could see their revenue increase by 34% by 2022, while another suggests that AI could cut costs and increase productivity across the industry to the tune of $1 trillion by 2030.

The message is clear — banks need to be investing in and experimenting with a range of AI technologies across many use cases to stay competitive. The impact of these technologies — including machine learning and deep learning, natural language processing and generation, and computer vision — will upend markets and operations across bank business lines. It will also be comparatively swift, as banks are already sitting on massive troves of data to train and fine-tune AI models. Many other industries are still in early stages of digitization and lack those critical data stockpiles.

However, given the stakes and the speed with which AI is changing the industry, it's critical to be aware of risks on the horizon. As AI becomes more embedded in banks' most critical operations, particularly in ways that affect the financial stability of both institutions and their customers, it could expose banks to new hazards. Two of the most dangerous and far-reaching areas of risk for AI in banking are the opacity of some of these technologies and the vast changes AI will inflict on bank workforces.

One of the chief challenges with AI, particularly deep learning models, is opacity. While these models often prove over time to be more accurate than human decision-making, they rarely reveal how they generated their conclusions.

This opacity could open banks up to risks without their knowledge. As AI increasingly makes decisions that affect customers or banks' balance sheets, both regulators and the public could become uncomfortable with those decisions being made by these so-called black boxes, which could harbor hidden biases in their decision-making. The Federal Reserve has published guidance requiring that banks be able to validate and assess the decision-making of their analytics tools. Additionally, a recent research study found that fintechs that used AI models in loan underwriting charged minority borrowers higher interest rates. This issue could draw increased regulatory scrutiny in the future, as well as public backlash if customers find out they were negatively impacted by a model's biased conclusions.

The good news for banks is that AI academic researchers and technology providers are making progress in reducing the opacity of these technologies, making it possible for them to explain how they computed their results. Banks must pay close attention to this progress and should expect that the ability to explain how AI models generate their conclusions will be a major point of discussion with both regulators and technology vendors going forward.

While opacity raises the risk of backlash from regulators and consumers, the impact of AI on banks' operations could draw backlash from a third group — their own employees. Banks could face new risks from both the potential automation of jobs and the ways AI will reshape how employees perform their work.

Citi notably drew attention to the potential for AI-driven job losses in the sector when it publicly announced in 2018 that it could cut 10,000 jobs by 2023 through automation. Technology-driven automation has been transforming the sector's workforce for decades, but AI could greatly accelerate that trend. Additionally, a wealth of research and media coverage has drawn the public's attention to the potential for massive AI-driven labor market disruptions. Over the long term, such disruptions could become a public policy issue if some of the more dire predictions prove true. Large-scale layoffs could draw the ire of both employees and the broader public, damaging banks' reputations and potentially hurting their ability to attract new talent and customers.

In the near term, banks will face challenges stemming from AI changing employees' tasks and routines. One survey found that only one in four banking executives believe employees are ready to work with AI. Employees not only need to be trained to work with AI technologies, but also need to buy in to using them in their day-to-day tasks. That can be a major challenge, as employees may be suspicious of technologies that they fear could automate their jobs, and may simply trust their own decision-making more than AI-driven insights. To take a notable example from another industry, UPS spent significant time and money training drivers and gathering their feedback when it rolled out an award-winning AI-based route optimization solution. Many of the drivers simply didn't agree with the routes the solution computed, an obstacle that took a careful approach and substantial effort to overcome. Banks should likewise plan to invest heavily in earning employees' trust in new AI solutions, or those solutions may go underutilized, putting ROI at risk.

Any massive force of change comes with both risks and opportunities. Banks are right to increase their investment and experimentation in AI technologies; otherwise they'd put themselves on a path to obsolescence. But those investments must be made as part of an overall AI strategy that accounts for these risks and includes careful consideration of how to mitigate, or at least minimize, them.
