BankThink

Don’t let AI become a black box

Experts estimate that by 2030, artificial intelligence will provide banks with over $1 trillion in cost savings. New use cases are being identified in front-office, middle-office and back-office activities. We have seen banks adopt pilot AI initiatives to automate underwriting, address labor inefficiencies, customize product offerings and identify fraud patterns.

There is little doubt that AI will power banks of the future — the success of this, however, will depend significantly on the degree to which organizations can effectively prevent AI from becoming a black box.

Intelligence allows us to learn from experience, adapt to new situations, understand abstract concepts and evolve accordingly. In theory, AI has this same potential, but today’s banking use cases are simple, deterministic in nature and relatively benign. Bank management is justifiably wary of trusting a machine to perform sophisticated tasks. Emerging AI risks include bias in lending decisions, a higher probability of insider threats and blatant discrimination. These new risks create a lack of trust in AI and present an obvious hurdle for its widespread adoption. Regulators are starting to take notice, including Federal Reserve Gov. Lael Brainard, who identified both opportunities and concerns regarding AI in a November speech.

To date, banks have done an admirable job confining AI use cases to low-risk environments, with additional human oversight to supplement controls and contain risk. This approach makes sense today, but it is not sustainable for widespread growth and adequate ROI. It is quite likely that, in order to achieve adequate cost savings, organizations will eventually feel pressure to remove some elements of human oversight. In addition, nonregulated financial services innovators, including fintech vendors, have little short-term economic incentive to diligently maintain oversight of the black box. Adoption and innovation of AI in financial services will grow, but this growth will be fraught with dangerous new risks.

Understanding how these risks emerge requires some basic knowledge of AI. AI learns from large datasets that feed its algorithms, somewhat analogous to how humans learn based on how (and how fast) our brains take in information. All datasets have the potential to be biased, just as all human beings have the potential to carry bias. Bias in AI can arise from insufficient data, inaccurate data or black-box algorithms misinterpreting connections in the data.
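
To make that mechanism concrete, consider a deliberately simplified, hypothetical sketch in Python; the groups, incomes, approval history and function names below are invented for illustration only, not a description of any bank's system. A naive model that learns nothing more than historical approval rates will faithfully reproduce whatever bias those rates contain, denying applicants from the historically disfavored group regardless of their actual creditworthiness.

```python
# Hypothetical illustration: a toy "model" trained on historically biased
# approval data learns only each group's past approval rate and applies it
# to new applicants, reproducing the bias baked into the data.
# All field names and figures are invented for illustration only.

historical_loans = [
    # (applicant_group, income, approved)
    ("group_a", 52_000, True),
    ("group_a", 48_000, True),
    ("group_a", 39_000, False),
    ("group_b", 51_000, False),   # similar income, but historically denied
    ("group_b", 47_000, False),
    ("group_b", 60_000, True),
]

def train_naive_model(data):
    """Learn each group's historical approval rate -- nothing more."""
    rates = {}
    for group, _, approved in data:
        total, yes = rates.get(group, (0, 0))
        rates[group] = (total + 1, yes + int(approved))
    return {g: yes / total for g, (total, yes) in rates.items()}

def predict(model, group):
    """Approve whenever the group's historical approval rate exceeds 50%."""
    return model.get(group, 0.0) > 0.5

model = train_naive_model(historical_loans)
print(predict(model, "group_a"))  # True  -- past favoritism carried forward
print(predict(model, "group_b"))  # False -- denied regardless of income
```

Real lending models are far more complex, but the failure mode is the same: if the training data under-represents or systematically disadvantages a group, the model inherits and automates that pattern unless someone is accountable for catching it.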

The potential for AI bias can affect any consumer bank that adopts the technology, including biases related to gender, race and ideology. Bias can also creep into use cases when human model developers lack the business logic or business context needed to understand and validate legitimate outputs. Unfortunately, these issues often are not identified until it is too late, after perilous new risks have manifested and amplified.

Current regulation — including model risk management guidance (SR 11-7), the framework banks use to manage model governance, policy and controls — is not specific enough to manage potential AI risks. That guidance was designed to address the unintended consequences of decisions based on static models. The real-time, self-learning nature of AI renders it relatively ineffective.

The current approach was designed for static models; it was never intended to govern real-time, dynamic AI software. In addition, current regulation does not subject fintechs to the same level of scrutiny as banks.

The current controls — or, more accurately, the lack thereof — will lead to dangerous new threats in AI. The absence of specific AI controls could result in biased automated credit decisions as AI is more broadly deployed. For example, AI can automate and amplify biases toward specific customers while alienating other customer profiles, affecting the general availability of credit in unknown ways. To address these concerns, banks and fintech firms that create AI software should be subject to dynamic, real-time model risk management and model validation to create proper guardrails, avoiding costly biases while building trust in AI. This kind of smart reform would actually improve innovation by building trust for all stakeholders.

Leading organizations — such as Google, Microsoft and DARPA — are researching techniques to make AI more explainable and to address some of these risks and concerns. Explainable AI is a lofty objective, however, and is likely many years away from paying dividends. In the meantime, banks should focus on accountable AI. Accountable AI starts with human accountability: Everyone involved in creating the AI and its datasets must be responsible for risk management and for mitigating AI's unintended consequences. Executive leadership and some level of regulatory guidance for both banks and fintech firms can create a proper framework and strategy for managing those risks.

Although it might seem counterintuitive, proper risk management techniques are the key to unlocking the full potential of artificial intelligence in the banking industry — and fostering desperately needed trust.
