BankThink

Don’t ditch disparate impact

It’s clear from early academic research that new underwriting methods can sharply reduce unfair bias in credit decisions and improve financial inclusion.

But as data sources proliferate and machine learning technologies advance, it’s important to remain vigilant.

Fortunately, the lending industry is one of the only sectors in the digital economy where federal legislation and regulation have already anticipated the possibility of “digital discrimination” and established clear requirements for regulated firms to address it.

The existing regulatory framework implementing the Equal Credit Opportunity Act requires that companies test the outcomes of their lending for potential discrimination, even when that discrimination is clearly unintended. If a lender finds that an algorithm, or the use of certain data in a model, leads to discriminatory outcomes for protected classes (race, national origin, religion, age, sex, etc.), it must seek a less discriminatory method.

As use of alternative data grows, the industry should work to maintain this “disparate impact” framework in fair lending regulation and in the implementation of ECOA.

It doesn’t just protect borrowers; it protects innovation. And it particularly protects the very use of alternative data that is expanding access to credit and lowering loan pricing for people of color and other borrowers poorly served by traditional underwriting.

It is also a model of so-called outcomes-based regulation, an approach that can help government watchdogs better keep pace with emerging technologies.

This disparate impact approach works under ECOA because it provides a baseline and an academically rigorous method for rooting out bias in lending. It also means that concerns about algorithmic discrimination rooted in personal anecdotes, such as differing credit limits offered to spouses in the same household, will be scrutinized against a statistically significant data set under ECOA’s disparate impact testing regime.

This helps to protect borrowers and provides a benchmark for lending programs.
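To make the outcomes-based idea concrete, here is a minimal sketch of the kind of check such a regime implies. It is illustrative only: the four-fifths ratio below is a heuristic borrowed from employment law rather than the ECOA legal standard, and the group names and figures are hypothetical.

```python
# Illustrative sketch only: a toy outcomes-based disparity check.
# The four-fifths rule is a heuristic from employment law, not the
# ECOA legal standard; all groups and figures are hypothetical.

approvals = {
    # group: (applications, approvals) -- hypothetical figures
    "group_a": (1000, 620),
    "group_b": (1000, 430),
}

# Approval rate per group.
rates = {g: approved / applied for g, (applied, approved) in approvals.items()}

benchmark = max(rates.values())  # highest approval rate observed
for group, rate in rates.items():
    ratio = rate / benchmark                    # adverse impact ratio
    flag = "review" if ratio < 0.8 else "ok"    # four-fifths heuristic
    print(f"{group}: approval rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

The point is that the test looks only at outcomes by group, not at which inputs the model used, which is what lets it keep pace as data sources and modeling techniques change.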

Recently, revisions have been proposed to the disparate impact standard in the context of fair housing law that could turn away from this outcomes-based approach in favor of a framework focused on specific model inputs.

While well intended, such a shift in focus from lending outcomes to model inputs could end up harming borrowers by restricting the very innovation that is lowering prices and expanding access to better credit products for more borrowers.

The fact is, multiple data sources on a proposed “permissible list,” each individually acceptable, could be combined inside an algorithm into a multivariable stand-in for a protected class. This can result in discriminatory outcomes in lending.
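A minimal synthetic sketch of that effect, assuming nothing about any real lender’s data: two inputs that are each uncorrelated with a protected attribute can, once combined inside a model, reconstruct it exactly. The feature names and data below are hypothetical.

```python
# Illustrative sketch only: two inputs, each with ~zero correlation to a
# protected attribute on their own, combine into an exact stand-in for it.
# All data is synthetic and constructed for the demonstration.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

f1 = rng.integers(0, 2, n)  # "permissible" input 1 (hypothetical binary signal)
f2 = rng.integers(0, 2, n)  # "permissible" input 2

protected = f1 ^ f2  # hidden protected attribute, constructed so that
                     # neither input alone reveals it

def corr(x, y):
    return float(np.corrcoef(x, y)[0, 1])

# Individually, each input looks harmless...
print(f"f1 vs protected:    r = {corr(f1, protected):+.2f}")     # ~0.00
print(f"f2 vs protected:    r = {corr(f2, protected):+.2f}")     # ~0.00

# ...but an interaction term a model could learn is a perfect proxy.
proxy = f1 + f2 - 2 * f1 * f2
print(f"proxy vs protected: r = {corr(proxy, protected):+.2f}")  # +1.00
```

An input-by-input review would approve both features here; only testing the model’s outcomes would catch the proxy.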

It could also prove difficult for regulators to keep up. Imagine the game of whack-a-mole as federal and state regulators attempt to evaluate and maintain an ever-expanding list of legal and illegal data sources. Further, imagine adding the new AI modeling techniques that would need to be assessed for the “not permissible” list.

Finally, proposals to abandon the long-standing disparate impact testing for lenders could unnecessarily chill innovation in new techniques or data sources that may produce better products, more inclusion and less unfair disparity for future borrowers.

Federal Reserve research has found that traditional credit scoring methods, such as FICO, leave black and Hispanic borrowers excluded far too often from mainstream prime credit options.

In fact, traditional credit scores classify more than three times as many black borrowers (53%) and almost twice as many Hispanic borrowers (30%) as white borrowers (16%) into the lowest two deciles of credit scores.

By contrast, the early returns in the academic literature on the impact of alternative data, AI and machine learning in lending are very promising. Alternative data approaches have been shown to reduce racial bias, expand credit access and lower pricing for those previously excluded.

A National Bureau of Economic Research study found that algorithmic approaches to mortgage lending reduced bias by 40% compared with traditional, face-to-face FICO underwriting.

Further, researchers at the Federal Reserve Bank of Philadelphia studied volumes of data from LendingClub and concluded that about 40% of its loans were made in the 10% of the country where banks had closed branches at the highest rates.

“This analysis points to the possibility that fintech lenders can provide credit in areas that may be underserved by traditional banks,” the report said.

A second study by the Philadelphia Fed also found that consumers paid smaller interest rate spreads on loans from LendingClub when compared to traditional lenders, indicating the benefits of fintech lending for borrowers.

The Consumer Financial Protection Bureau has reported similar findings since its establishment under the Dodd-Frank Act.

These positive results are early proof that the regulations protecting borrowers from both disparate treatment and disparate impact by lenders have created a strong yet nimble framework to prevent unintentional “algorithmic redlining,” while allowing financial firms the freedom to innovate.

Regulators should ensure that any changes to the disparate impact framework neither reduce protections for consumers nor unintentionally limit the emerging uses of data that are delivering better products to previously underserved or excluded communities.
