Can AI Be Programmed to Make Fair Lending Decisions?

By Penny Crosman, Editor at Large

As lenders dabble with artificial intelligence in credit decisions, debate has stirred over whether AI and even more traditional decision-making algorithms can be trusted to make sound decisions.

In an earlier article, we posited that the use of AI in underwriting could have unintended consequences — such as shutting disadvantaged people out of the financial system. That's because AI systems can learn on their own, are not strictly governed by rules, and can make their own rules and assumptions.

Marc Stein, CEO of Underwrite.io, provider of an AI underwriting platform used by several alternative lenders, took issue with this claim.

"There's a flaw in this logic: the program doesn't decide on its own," he said. "The program is constrained by the same regulations as human underwriters. Racial or gender discrimination doesn't become legal because it's done by a machine. It is incumbent on the developer of the algorithm to insure that the results can't evince illegal bias. When discussing algorithmic lending with major banks, the first question they ask is, 'How does this algorithm prevent disparate impact?'"

Cathy O'Neil, author of the new book "Weapons of Math Destruction," argues that unintended bias can emerge when AI engines are fed new sources of information.

She offers as an example ZestFinance, the online installment lender.

"They brag that they score people in part based on their ability to use punctuation, capitalization and spelling, which is obviously a proxy for quality of education," said O'Neil, a former math professor. "It has nothing to do with creditworthiness. Someone who is illiterate can still pay their bills." (ZestFinance did not respond to a request for an interview. Founder and CEO Douglas Merrill has said, "If you fill in your name in all caps, you're a much higher risk.")

Though online lenders have to follow the same regulations traditional lenders do, "that doesn't mean they're actually doing it," said O'Neil. "Unless they're auditing their algorithms for fairness and showing me those audits, why shouldn't we expect something to be going wrong?" When lenders use factors like these, "instead of using somebody's past behavior, they're talking about demographics," O'Neil said. "That's when it gets discriminatory because we have a system in this country where quality of education is very correlated to class and race."
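For illustration, the sort of audit O'Neil is asking for can start very simply. The sketch below applies the "four-fifths" rule of thumb that fair-lending analysts borrow from employment law: if one group's approval rate falls below 80% of another's, the model warrants review for disparate impact. The decisions, group labels and threshold here are invented for the example, not any lender's actual figures.

```python
def adverse_impact_ratio(decisions, groups, protected, reference):
    """Approval rate of the protected group divided by the approval
    rate of the reference group. Values below 0.8 are a red flag."""
    def approval_rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes)
    return approval_rate(protected) / approval_rate(reference)

# 1 = approved, 0 = denied; group labels come from post-hoc analysis,
# not from the credit decision itself.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = adverse_impact_ratio(decisions, groups, protected="B", reference="A")
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.67 -- below 0.8, review
```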

Saying No to AI, for Now

Some lenders, ranging from traditional banks to online platforms, say they have no intention of using artificial intelligence technology — computer systems performing tasks that normally require human intelligence — in loan decisions.

Tim Barber, executive vice president of credit risk management at Huntington Bank, is one.

"When we're talking about AI and machine learning — machines taking over and making their own decision — I'm not comfortable with that," he said.

He is fine with automated decisions governed by a strictly enforced set of rules and policies. The decision algorithm Huntington uses for credit decisions incorporates customer attributes within a rules-based construct; those rules are controlled by the credit and risk groups.

"There's a significant difference between AI, which to me denotes machine learning and machines moving forward on their own, versus auto-decisioning, which is using data within the context of a managed decision algorithm," he said.

Lending Club takes a similar stance.

"We don't use artificial intelligence in the sense that the machine determines its own rules and make decisions without any human interaction," said Sid Jajodia, chief investment officer at Lending Club. "Because of how regulated financial services and lending decisions are, you can't do that."

Dubious Data Sources

One objection to AI in lending is that artificial intelligence engines require massive amounts of data to "learn" to make decisions the way humans do. There's a fear that companies will be indiscriminate about the data they feed in.

Social media data is one example — the idea that a potential borrower will be judged by her Facebook connections is anathema to many.

"We don't use any Facebook, Twitter or other social media data to score someone and determine their risk or how to price that risk," Jajodia said. "There are a ton of privacy concerns around using someone's social media feeds to try to figure out if they're a risk or not. There's very clear regulation around using proxy data for gender, age, race and other protected attributes."

Huntington also does not use data gathered from Facebook or other social media accounts in its credit decisions, partly due to regulation. For instance, if a bank denies credit, the Equal Credit Opportunity Act requires it to provide the applicant with the specific reasons for the denial.

"Anything that doesn't lend itself to a reason we could communicate to the borrower is not something we use in our automated decision algorithms," Barber said. "We would not, for example, ever say 'we didn't like your Facebook friends, therefore we won't give you a residential mortgage loan.' We might say your debt burden is too significant. That is one of the most common reasons for denying."

Lending Club does use online data to help find fraud, however.

"We live in an online channel, and online you don't know who's at the other end of the connection," Jajodia said. "So you have to use available data to triangulate and identify suspicious applications, and to request additional documents from the borrower to verify identity."

Opaque Models

A related concern is transparency. Online lenders often describe themselves as transparent, but they also deflect questions about the algorithms and technology they use in credit decisions.

"Of course they're black boxes," O'Neil said. "They're not offering to explain it, so it's a black box."

There are business reasons for playing it close to the vest, Jajodia said.

"No lender, whether it's a bank or an online lender, will ever tell you their exact algorithm or the weight of attributes or exact attributes, because that's inherently intellectual property," he said. "If I give that out, what stops another lender from copying that and marketing to customers as a competitor?"

That said, regulators can review Lending Club's algorithms "to make sure there's nothing being used that shouldn't be used," he said.

Anupam Datta, an associate professor at Carnegie Mellon University, has developed a method, called Quantitative Input Influence, for analyzing AI engines and demystifying their inner workings. It tests an engine by submitting a series of loan applications, then determining the weight the machine places on various features, such as income and ZIP code.

"We treat it as a black box and we give it inputs in carefully constructed tests," Datta said. He envisions his system being used by banks to make sure they're in compliance with fair-lending laws and by regulators to make sure banks are following those laws.

Despite such efforts, O'Neil sees danger ahead in the growing use of AI and algorithms in lending decisions.

"Compared to the slew of [algorithmic] WMDs running amok, the prejudiced loan officer of yesteryear doesn't look all that bad," O'Neil stated in her book. "At the very least, the borrower could attempt to read his eyes and appeal to his humanity."

Editor at Large Penny Crosman welcomes feedback at penny.crosman@sourcemedia.com.
