'Fintechs in the crosshairs': Lenders deploy fairness testing software

Octane Lending, an online lender based in New York, faces a challenge when it comes to loan decisions. The company helps people buy power sports vehicles, such as motorcycles and all-terrain vehicles. Such loans tend to get reported as auto loans or secured consumer loans, not specifically as motorcycle loans, so finding comparable records is difficult.

So the company has built its own AI-based credit score and underwriting model. It also uses the FICO Auto 9 score.

Recently, to confirm that its credit models don't inadvertently reflect bias or have a disparate impact on disadvantaged communities, the $1.5 billion-asset Octane began deploying fairness testing software. Having come from a large British bank, Chief Risk Officer Ray Duggins is attuned to the need for fair-lending and anti-discrimination efforts, which are closely regulated in Europe. He was formerly chief risk officer at GE Capital and at Standard Chartered Bank's consumer bank in Singapore.

"I've never built a model where I intended to discriminate against anyone," Duggins said. "But you always have to go back and test to make sure you're not doing something inadvertently."

Octane is not alone. Its fairness vendor, Los Angeles-based FairPlay, developer of what it calls "fairness-as-a-service" for AI-based loan software, says 10 financial services customers, including two large banks, are using its software. 

FairPlay this week raised $10 million in a Series A round led by Nyca Partners, with participation from Cross River Digital Ventures, Third Prime, Fin Capital, TTV, Nevcaut Ventures, Financial Venture Studio and Jonathan Weiner, a venture partner at Oak HC/FT. This follows FairPlay's $4.5 million seed round in November.

Why now

When FairPlay launched in 2020, making automated lending fair was not a burning issue.

"When we started, fairness was on the agenda, especially when you were talking to risk people, but it wasn't really a priority; it certainly wasn't at the top of the list," said Kareem Saleh, FairPlay's CEO. "It was viewed by lenders as something to pay attention to, to not run afoul of the law and keep the government out of their business." 

More recently, bank regulators have expressed concern that lenders using AI might try to skirt fair lending laws. Software might find patterns in data on past loans that perpetuate existing bias, or latch onto a variable such as ZIP code that acts as a proxy for a prohibited criterion like race and ends up informing loan decisions. The effect could be digital redlining, which is illegal.
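
To make the proxy problem concrete, here is a minimal, purely illustrative sketch, not FairPlay's method, of the kind of screen a modeler might run: check whether any candidate input is strongly associated with a protected attribute. The data, feature names and threshold below are invented for illustration.

```python
# Illustrative only: a minimal proxy screen, not FairPlay's method.
# Flags model inputs strongly associated with a protected attribute,
# which could act as proxies for it even when the attribute itself
# is excluded from the model.
import pandas as pd

# Hypothetical applicant data; in practice the protected attribute is
# often estimated rather than observed directly.
df = pd.DataFrame({
    "zip_median_income": [32, 41, 38, 95, 88, 102, 35, 97],
    "loan_to_value":     [0.9, 0.85, 0.92, 0.6, 0.65, 0.55, 0.88, 0.62],
    "protected_group":   [1, 1, 1, 0, 0, 0, 1, 0],
})

candidate_features = ["zip_median_income", "loan_to_value"]
for feature in candidate_features:
    corr = df[feature].corr(df["protected_group"])
    if abs(corr) > 0.5:  # illustrative threshold, not a regulatory one
        print(f"{feature}: corr={corr:+.2f} -- possible proxy, review")
```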

"I think the chief concern on the part of the industry as well as regulators is that not enough stakeholders fully understand how algorithmic lending works," said Manny Alvarez, founding principal of BridgeCounsel Strategies. He was formerly commissioner of California's Department of Financial Protection and Innovation, general counsel at Affirm and an enforcement attorney at the CFPB.

"That's detrimental because it inhibits productive regulation," he said. "If you don't know how an algorithm is operating, it is difficult to fairly assess the lending outcomes of a particular model as a regulator. And by the same token, if you don't know how your models are operating as a lender, it's going to be difficult to understand if you have an unintended proxy for some prohibited basis, or to understand whether or where in your portfolio you have disparate outcomes that can be optimized."

Saleh said he sees a perception among regulators that the algorithms are going to discriminate and that "the fintech players who came of age in the last several years were insufficiently attentive to this stuff. So fintech is in the crosshairs and there's a belief that the algorithms left to their own devices are going to do harm either to the consumers or to the safety and soundness of the financial system."

Also, over the last two or three quarters, some lenders have come to see fairness checks as an opportunity for competitive advantage, a way to find borrowers others are overlooking, he said.

"Companies themselves recognize that they can't have underwriting for the digital age and compliance for the stone age," Saleh said. 

How FairPlay works

FairPlay's software has two core components. The first detects bias in credit models, looking for any algorithmic behavior that could cause an undesired result. The second takes another look at loan applicants who have been declined, factoring in additional information that might show that a person with a low credit score or thin credit file is still creditworthy.
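
FairPlay has not published the exact tests it runs. One standard disparate-impact check in fair lending, shown here as a sketch with made-up decision logs, is the adverse impact ratio, which compares approval rates across groups against the "four-fifths rule" threshold.

```python
# A common disparate-impact check (not necessarily FairPlay's):
# the adverse impact ratio (AIR) compares approval rates between a
# protected group and a control group; values below roughly 0.8
# (the "four-fifths rule") are often treated as a red flag.

def approval_rate(decisions):
    """Share of applications approved; 1 = approve, 0 = decline."""
    return sum(decisions) / len(decisions)

# Hypothetical decision logs for two groups of applicants.
protected_decisions = [1, 0, 0, 1, 0, 1, 0, 0]   # 37.5% approved
control_decisions   = [1, 1, 0, 1, 1, 0, 1, 1]   # 75.0% approved

air = approval_rate(protected_decisions) / approval_rate(control_decisions)
print(f"Adverse impact ratio: {air:.2f}")
if air < 0.8:
    print("Below the four-fifths threshold -- investigate the model.")
```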

Saleh calls the second-look process "fairness through awareness."

"I like to say that for 50 years in banking, we tried with good reason to achieve fairness or blindness, this idea that the only color we see is green," he said. "We just look at variables that are neutral and objective from the bureaus or from some other source and make our decision based on that." 

The trouble is, some populations are not well represented in credit bureau data. 

FairPlay provides additional information about Black applicants, female applicants, people of color and other disadvantaged groups.

For instance, added data about female borrowers could help a lender recognize that a person with inconsistent income might have taken a career break but is still creditworthy. 

Added data about Black applicants could help lenders understand an applicant who does not have a bank account. 

"A lot of Black Americans live in bank deserts, and as a result, they do most of their banking either at check- cashing stores or using apps like Venmo or Cash App," Saleh noted. "None of that data is reported back into the bureaus and they're not considered formally to have deposit accounts." 

Using the second-look software, one customer increased its overall approval rate by 10% and its approval rate for Black applicants by 16%, he said.

"What we're finding is that 25% to 33% of the time, the highest-scoring folks from minority backgrounds that get declined would have performed at least as well as the riskiest folks that those lenders are currently approving," Saleh said. 

FairPlay's software is "a highly technical tool that is easy for the lay person to use," Alvarez said. "And I think we need more of those solutions for the industry, as well as regulators."

How Octane uses it

Octane Lending has been building its own loan decision models since 2016; it's now in its third generation. 

When the company first started out, it attracted near-prime and subprime customers. Vehicle manufacturers would pay discounts to companies willing to make near-prime and subprime loans because nobody else would, Duggins said.

Today, about 60% of its loans are to prime customers. 

"We need to function on all credit spectrums right now," Duggins said. 

Octane's custom credit score is AI-based. It uses nontraditional credit bureau data, such as how people pay their phone bills and how long they've worked or lived in different places.

"All that builds up some indication of stability of the individual," Duggins said.

Octane has been using FairPlay's software to look for bias in its models for several months "to validate and confirm that what we're doing is correct," Duggins said. 

Duggins, who has been in the banking industry for more than three decades, has watched the thinking behind fair-lending technology evolve.

"There would've been no FairPlay back in 1983 or 1985, nobody ever worried about those things," he said. "To see the evolution of where we are today and the sophistication is really quite amazing."

The double-edged sword of automated lending

Alvarez acknowledges that the many online lenders and traditional banks using automated lending to extend credit in underserved communities have to be viewed with some skepticism.

"Algorithmic lending is a tool and it is possible to use it incorrectly or to the detriment of certain populations," Alvarez said. "It is also possible to use it to the benefit of certain populations. There is reason to be skeptical as well as optimistic. But I also think it's dangerous to meet this moment with a throw-the-baby-out-with-the-bathwater attitude."

Alvarez also warned that AI-based underwriting models can drift, especially models based on machine learning that consume ever-larger amounts of data.

"Model drift is a real phenomenon, and you need human intervention to observe that drift and course correct when necessary," Alvarez said. 

Automation is useful and inevitable, he noted.

"But human intervention is likely something that will always be necessary in order to ensure that lending decisions are made fairly and responsibly," he said.
