BankThink

Fair-lending laws haven’t caught up to AI

A version of this post was previously published on the Brookings website as part of a series on artificial intelligence. It has been edited for length.

Banks have been in the business of deciding who is eligible for credit for centuries. But in the age of artificial intelligence, machine learning and big data, digital technologies have the potential to transform credit allocation in positive as well as negative directions.

Given the mix of possible societal ramifications, policymakers must consider what practices are and are not permissible and what legal and regulatory structures are necessary to protect consumers against unfair or discriminatory lending practices. The country’s lending laws will have to be updated to keep pace with these technological developments, as they are adopted more widely by banks and other financial companies.

When artificial intelligence can apply a machine learning algorithm to big data sets, it can find empirical relationships between new factors and consumer behavior. Thus, AI coupled with ML and big data allows far more types of data to be factored into a credit calculation.

Many of these factors turn out to be statistically significant predictors of whether a borrower is likely to pay back a loan. A recent Federal Deposit Insurance Corp. working paper demonstrated that a handful of simple digital footprint variables could outperform the traditional credit score model in predicting who would pay back a loan. Specifically, the researchers examined people shopping online at Wayfair and applying for credit to complete an online purchase.

The researchers identified five key variables: the borrower's type of computer (Mac or PC), type of device (phone, tablet, PC), the time of day the application was made (borrowing at 3 a.m. is not a good sign), the borrower's email domain (Gmail is a better risk than Hotmail) and whether the shopper's name is part of their email address (names are a good sign). Crucially, all of these are simple, immediately available and cost the lender nothing, as opposed to, say, pulling a credit score.
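To make the mechanics concrete, here is a rough sketch of how footprint variables like these could feed an off-the-shelf scoring model. The data, column names and model choice below are invented for illustration; the working paper itself did not publish code.

```python
# Hypothetical sketch: scoring applicants on digital-footprint features
# similar to those described above. All data and names are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Toy applicant data: device, email domain, hour of application,
# whether the applicant's name appears in the email, and repayment.
df = pd.DataFrame({
    "device":        ["mac", "pc", "phone", "pc", "mac", "tablet"] * 50,
    "email_domain":  ["gmail", "hotmail", "gmail", "other", "gmail", "hotmail"] * 50,
    "hour_applied":  [14, 3, 10, 23, 9, 2] * 50,
    "name_in_email": [1, 0, 1, 0, 1, 0] * 50,
    "repaid":        [1, 0, 1, 1, 1, 0] * 50,
})

# One-hot encode the categorical footprint variables.
X = pd.get_dummies(df.drop(columns="repaid"), columns=["device", "email_domain"])
y = df["repaid"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Even a plain logistic regression will pick up these correlations;
# a more flexible ML model would only do so more aggressively.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The specific model is beside the point: any standard classifier handed these inputs would learn the same relationships, and with them whatever correlations those inputs carry to protected classes.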

An AI algorithm could easily replicate these findings, and ML could probably add to them. But each of these variables is correlated with one or more protected classes. It would probably be illegal for a bank to use any of them in the U.S., or at the very least doing so would fall into a legal gray area.

Incorporating new data raises a bunch of ethical questions. Should a bank be able to lend at a lower interest rate to a Mac user, if, in general, Mac users are better credit risks than PC users, even controlling for other factors like income and age? Does your decision change if you know that Mac users are disproportionately white? Is there anything inherently racial about using a Mac? If the same data showed differences among beauty products targeted specifically to African-American women, would your opinion change?

Answering these questions requires human judgment as well as legal expertise on what constitutes acceptable disparate impact. A machine with no knowledge of the history of race, or of the agreed-upon exceptions, could never independently recreate the current system, which permits credit scores (themselves correlated with race) while ruling out Mac versus PC.
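Disparate impact is at least something that can be measured. As a purely illustrative sketch, borrowing the "four-fifths" rule of thumb from employment law and applying it to credit approvals (the data and threshold here are assumptions, not a legal standard):

```python
# Hypothetical sketch: comparing approval rates across groups and
# flagging a low adverse-impact ratio for closer review.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"] * 25,
    "approved": [1,   1,   1,   0,   1,   0,   0,   1]   * 25,
})

rates = decisions.groupby("group")["approved"].mean()
impact_ratio = rates.min() / rates.max()

print(rates)
print(f"Adverse impact ratio: {impact_ratio:.2f}")
# A ratio below 0.80 is a common, though not legally dispositive,
# signal that the model's outcomes deserve a closer look.
```

But a statistic like this only flags a disparity; deciding which disparities are acceptable remains a human and legal judgment.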

Policymakers need to rethink our existing anti-discriminatory framework to incorporate the new challenges of AI, ML and big data. A critical element is transparency for borrowers and lenders to understand how AI operates. In fact, the existing system has a safeguard already in place that itself is going to be tested by this technology: the right to know why you are denied credit.

When you are denied credit, federal law requires the lender to tell you why. This is a reasonable policy on several fronts. First, it gives the consumer the information needed to improve their chances of receiving credit in the future. Second, it creates a record of the decision that helps guard against illegal discrimination. If a lender systematically denied people of a certain race or gender based on a false pretext, requiring the lender to state that pretext gives regulators, consumers and consumer advocates the information they need to pursue legal action and stop the discrimination.

But this legal requirement creates two serious problems for financial AI applications. First, the AI has to be able to provide an explanation. Some machine learning algorithms can arrive at decisions without leaving a trail as to why. Simply programming a binary yes/no credit decision is insufficient. In order for the algorithm to be compliant, it must be able to identify the precise reason or reasons that a credit decision was made. This is an added level of complexity for AI that might delay adoption.
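For simple models, producing those reasons is straightforward. The sketch below assumes a linear scoring model with invented feature names and shows one common way to rank the factors that pushed a particular applicant's score down; it is an illustration of the idea, not a compliant adverse-action system.

```python
# Hypothetical sketch: deriving "reason codes" for a denial from a
# linear credit-scoring model by ranking each feature's contribution.
import numpy as np

# Assumed coefficients (positive = raises the score) and one denied
# applicant's standardized feature values. All names are invented.
features     = ["credit_utilization", "months_since_delinquency",
                "account_age", "recent_inquiries"]
coefficients = np.array([-1.2,  0.8,  0.5, -0.6])
applicant    = np.array([ 1.5, -0.7, -1.0,  2.0])

# Each feature's contribution to this applicant's score.
contributions = coefficients * applicant

# The most negative contributions are the principal reasons for denial.
order = np.argsort(contributions)
top_reasons = [features[i] for i in order[:2]]
print("Principal reasons for denial:", top_reasons)
```

For a linear model this decomposition is trivial. For the more complex models that make AI attractive in the first place, producing comparably precise reasons is exactly the difficulty the law poses.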

The second problem is what happens when the rationale for the decision is unusual. For example, one of the largest drivers of personal bankruptcy and default is divorce. An AI algorithm may be able to go through a person's bank records and web search history and determine with reasonable accuracy whether they are being unfaithful. Given that infidelity is a leading cause of divorce, it would probably be a relevant factor in a risk-based pricing regime, and a good reason to deny credit.

Is it acceptable for a bank to deny an application for credit because a machine suspects infidelity? If so, the next question is whether it is right for the bank to tell the consumer directly that this is the reason. Imagine if the bank sent a letter to the consumer's home with that finding. The use of AI to determine credit, coupled with the requirement that written notice be given with the rationale for credit denial, raises a host of privacy concerns.

If it is not acceptable, then who determines what acceptable grounds are? While marital status is a protected class under the Equal Credit Opportunity Act — you cannot discriminate against someone for being single — it is not clear that lenders concerned with changes to marital status would be prohibited from using that information. As AI determines new metrics that interact with existing protected classes, it will be incumbent upon financial regulators, courts and eventually lawmakers to set new policy rules that govern this brave new world.

The core principle of risk-based lending will be challenged as machines uncover new markers of risk. Some of these will be predictive in ways that are hard to imagine. Some will be predictive in ways that are difficult to disclose. And some will simply reflect a world in which bias and discrimination persist. Unlocking the benefits of this data revolution could help us break that cycle, using credit as the powerful tool for opportunity that it is. But the existing 1970s-era legal framework will need a reboot to get us there.
