BankThink

AI may just create the illusion of good credit decisions

Let’s apply some natural intelligence to the concept of artificial intelligence. AI has been conflated with big data, machine learning and neural networks. It is also second only to blockchain as the most overused and overhyped term for the technologies reshaping banking and finance, particularly credit decisions.

Three years ago, it was fashionable to simply nod your head when a company founder or a conference panelist declared that AI and fintech would disrupt the banks; some entrepreneurs went so far as to state that banks were already obsolete.

Those founders believed then, as they do now, that AI is the main component of the disruption. They believe the technology will make loan officers obsolete (probably true) and lending more consistent, more efficient and safer. The reality is that AI will make lending more consistent and more efficient; whether it will make lending safer remains to be seen.

In other words, AI has yet to prove that it is more capable than humans at avoiding both safety-and-soundness and consumer-protection pitfalls in credit decisions. Indeed, humans will still be involved at key steps in the process.

This is because AI relies on data sets to produce a credit decision outcome. This is as it should be. But a handful of banks will be attracting and serving the same general demographic profiles and populations (think Wells Fargo and Bank of America) and therefore using standardized data sets to build their AI systems. How will regulators ever know whether the algorithms are making unbiased decisions? Humans program the algorithms, and human biases and tendencies will inevitably leak into the overall decision process, as the sketch below illustrates.
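To make that leak concrete, here is a minimal sketch, in Python, of how a model trained on historically skewed approval decisions reproduces the skew. Every feature, number and the demographic flag below is invented for illustration; no real bank data or model is implied.

```python
# Hypothetical sketch: how historical bias leaks from training labels into a model.
# All features, thresholds and the group flag are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

income = rng.normal(50, 15, n)    # synthetic incomes, in $ thousands
group = rng.integers(0, 2, n)     # hypothetical demographic flag (0 or 1)

# Suppose past loan officers approved group 1 less often at the same income.
# That bias is now baked into the "historical outcomes" used as labels.
approved = (income - group * 8 + rng.normal(0, 5, n)) > 45

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# The model dutifully reproduces the historical disparity at identical income.
for g in (0, 1):
    probe = np.column_stack([np.full(100, 50.0), np.full(100, float(g))])
    rate = model.predict(probe).mean()
    print(f"approval rate for group {g} at $50k income: {rate:.0%}")
```

Nothing in the code itself is “biased”; the disparity rides in on the labels, which is exactly why standardized historical data sets are so hard for regulators to vet.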

As training data is provided and defined for AI algorithms, “machine learning” is supposed to create distance between the technologists and the machine, and in turn between the humans setting the bank’s credit policies and the decisions being made. The refrain “We don’t know how it makes its decisions” may become increasingly common.

With data being anonymized and lending institutions starting to incorporate industrywide performance and underwriting data pools into their AI models, credit decisions will coalesce, with the same decisions being made across all lending institutions. On the surface, this might sound reasonable and even acceptable. From a systemic-risk point of view, however, this coalescing complacency will only hide any underlying problems that may be building up.

When the problems become known, usually during a recession, it will be too late, and every lender will rush to adjust its algorithms to account for the most recent “never happened before” crisis.

There are terms from science that describe this phenomenon succinctly. From chaos theory, the analogous term is “attractors”: pockets of stability that eventually tip into disorder. From biology, “punctuated equilibria” describes how underlying DNA can keep changing while the phenotype (here, credit decisions) doesn’t change until it hits a tipping point, and then changes in very short order.

We know the saying “bad data in means bad data out.” AI should help solve that challenge by more accurately identifying the “bad,” or not useful, elements. The harder challenge for AI, though, may not be bad data but a lack of the necessary data as the economic environment changes.

Neural networks, which are self-learning and so complex that the humans who create them cannot fully describe them, present a further set of problems. The foremost: if you don’t know how a decision is made, you cannot be confident that it is being made correctly. Yes, you can judge by credit performance. But when a lender runs afoul of a regulation, regulators won’t accept “We just don’t know how it works” as an excuse. The ability to audit these decisions will become a major challenge, and not just for lending; it will also be an issue for autonomous cars, medicine and space travel. There is a lot riding on getting this right.
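What might auditability look like in practice? One hedged sketch, with invented field names and no claim to represent an industry standard, is to log every decision with enough context (exact inputs, model version, a tamper-evident hash) that a reviewer can reconstruct it later:

```python
# Hypothetical sketch of a per-decision audit record for a credit model.
# The record format and field names are assumptions, not a regulatory standard.
import json
import hashlib
from datetime import datetime, timezone

def audit_record(model_version, features, score, decision):
    """Capture enough context to reconstruct and review one credit decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": features,          # the exact feature values the model saw
        "score": score,
        "decision": decision,
        # Hash ties the record to its inputs so later tampering is detectable.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
    }
    return json.dumps(record)

print(audit_record("credit-nn-2.3.1",
                   {"income": 52000, "dti": 0.31, "utilization": 0.42},
                   0.73, "approve"))
```

A record like this does not explain why the model scored the applicant as it did; its point is narrower, ensuring the decision can at least be replayed and examined when a regulator asks.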

All closed systems tend toward unstable states unless some form of randomness is inserted. For neural networks, it will be necessary to insert random mutations into the underlying data so that the system receives small tweaks which, over time, make the algorithms stronger. As performance data feeds back into the algorithm, the neural connections are strengthened or weakened depending on what is being optimized or minimized. These mutations may carry a cost in economic losses. But if that loss is significantly less than the cost savings of the new technology, and it helps avoid a dramatic systemic credit crisis, it should be chalked up to the cost of doing business.
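A minimal sketch of that mutation idea, with a noise scale and retraining schedule chosen purely for illustration rather than drawn from any real lending system:

```python
# Illustrative sketch of "random mutation": perturb the training data each
# retraining cycle so the model doesn't overfit a single benign credit regime.
# The noise scale and cycle count are assumptions, not tuned values.
import numpy as np

rng = np.random.default_rng(42)

def mutate(X, noise_scale=0.02):
    """Add small Gaussian noise, scaled to each feature's observed spread."""
    scale = noise_scale * X.std(axis=0)
    return X + rng.normal(0.0, scale, size=X.shape)

X = rng.normal(size=(1000, 5))       # stand-in for underwriting features
for cycle in range(10):
    X_train = mutate(X)
    # model.fit(X_train, outcomes)   # retraining step, elided in this sketch
```

Scaling the noise to each feature’s spread keeps the perturbation small relative to the data, so the model is nudged rather than destabilized from cycle to cycle.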
