BankThink

Machine learning’s promises, pitfalls

Advancements in financial technology can be credited with bringing about increased efficiency, reduced costs and expanded access to the financial system. But an overreliance on technology can also have the opposite effect.

The idea that advanced technological and mathematical systems can serve to perpetuate broader societal inequities, known as algorithmic bias, has become a hot topic, from Silicon Valley to the halls of Congress to the presidential campaign trail.

Credit score developers face these questions every day as they seek to accurately predict consumers’ credit risk while giving more Americans the chance to gain access to credit. Advanced statistical tools like machine learning and artificial intelligence certainly have great potential to aid in these objectives, but things can also go wrong when these tools aren’t used appropriately.

The information furnished to the credit bureaus, and used as an input to a credit score, is regulated to exclude data that could encourage discrimination. All mainstream credit scores are calculated solely from information about a consumer’s credit and financial behavior.

Factors like race, gender and wealth are never taken into account. In fact, the widespread adoption of credit scoring in consumer lending was a response to an earlier era when reliance on more subjective factors allowed for bias to enter into the lending process. That adoption was at the forefront of an unprecedented democratization of credit.

Indeed, credit score developers have been using early machine learning techniques for more than 30 years. Since then, developers have honed those techniques to leverage machine learning’s potential safely and effectively, building credit scores that are both predictive of credit risk and trustworthy to lenders and consumers.

Still, the hard questions about the use of techniques like AI and ML go beyond whether they include factors that could directly lead to bias. And while there is much to marvel at in ML’s effectiveness, the technology is only as smart as the data it consumes.

It is not a replacement for new sources of data that can be predictive of credit risk but are largely not included in the traditional credit bureau files, which drive mainstream consumers’ credit scores.

What’s more, overreliance on “ML-only” models — those upon which human constraints have not been applied — can actually obscure risks or shortchange consumers by picking up harmful biases and behaving counterintuitively.

Testing has found that such models can underestimate default risk or deny consumers improvements to their credit scores as they lower their debt. One such test model resulted in nearly 10% of consumer records receiving a lower score after debt was paid off.

Here’s why that’s a problem: imagine a consumer who visits a financial advisor saying they managed to pay off $10,000 in debt across two credit cards. The consumer then opens an app on their phone to check their credit score, only to learn their score has dropped.

Consumers and lenders expect a credit risk model to be highly predictive as well as transparent and understandable. Such a counterintuitive result would be unacceptable for a credit score used in actual lending decisions. But the greater risk is what can happen when appropriate human constraints aren’t applied to machine learning models.

The best approach is a middle ground in which AI and ML are used among the tools in a developer’s research and development process. This allows providers to leverage machine learning’s power to find patterns in large data sets as human credit scoring experts remain in control.

Such experts can continue to impose constraints to ensure the score meets the expectations of consumers and lenders using traditional methodology that has been refined and trusted for decades.

In credit scoring, the promise of machine learning is ultimately the promise that more creditworthy Americans can access credit. That potential lies in testing and deploying new alternative data sources that allow developers to more reliably score people, including those who can’t be scored using credit bureau data alone. In this context, ML can be a valuable tool to help sort through these alternative data sets and build more inclusive models.

In particular, recent strides have been made via the development of new models that incorporate telecom and cable bill payment data not typically found in traditional credit files. This also includes new models that allow consumers to link their checking, savings or money market accounts to refine the precision of their credit score based on proven indicators of sound financial behavior.

Artificial intelligence and ML are incredibly powerful tools, but they must be used responsibly. The right applications are helping to power the next generation of innovation in consumer lending — with human experts keeping a hand on the steering wheel at all times.
