BankThink

Regulators need to step up their game to stop tech-based discrimination

Slowly but surely, algorithms are taking over our lives.

From the news we see on Facebook and Twitter, to where we buy our groceries, to the movies we watch on Netflix, technology-driven processes are gaining sway over the decisions we make each day. Algorithms are also taking over our financial lives, and in the process, are helping to increase inequality and undermine the fabric of our society.

In financial services, automated decision-making should, in theory, make it easier and more efficient for financial institutions to offer products and services tailored to a variety of prospects and customers. Lenders can use algorithms to decide whether we can buy a home in a neighborhood with good schools, obtain a credit card that affords us greater flexibility in managing our finances or open a bank account instead of paying onerous check-cashing fees.

This sort of innovation is not inherently bad, but it can have bad consequences. For one thing, it can be used to circumvent fair-lending laws, which prohibit discrimination based on race, color, national origin, sex, age and other characteristics. In fact, it’s already happening. Online lenders and traditional lenders are using technology, statistics and data science to create underwriting models that do not incorporate these factors but produce the same results as if they did.

Imagine a bank agreed not to include skin color in its lending decisions ("We would never … !") but included hundreds of factors in its algorithms: what magazines one subscribes to, what brands someone frequently buys, what websites he or she visits and what music someone listens to. Those factors might very well predict a borrower's likelihood of repaying a loan, but they might also have the effect of excluding, for example, African-Americans. Imagine the computer spits out a rule based on Facebook "likes": Make loans to people who listen to country, but not to people who listen to hip-hop. Because a computer selected and weighted those factors, banks claim no one is to blame, and because the algorithm does not expressly use "black," the model doesn't violate the law. Unfortunately, the only ones being fooled by this model are the regulators, who lack the tools and personnel necessary to keep up.
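
A minimal synthetic sketch of how this proxy effect plays out (all data, feature names and correlation strengths below are invented for illustration, not drawn from any real lender):

```python
# Illustrative sketch, all data synthetic: a model that never sees the
# protected attribute can still produce disparate outcomes when one of
# its inputs is correlated with that attribute.
import random

random.seed(1)
population = []
for _ in range(10_000):
    group_a = random.random() < 0.5  # protected class; never given to the model
    # Invented correlation: the music-taste proxy tracks group membership.
    listens_hip_hop = random.random() < (0.8 if group_a else 0.2)
    population.append({"group_a": group_a, "listens_hip_hop": listens_hip_hop})

def blind_model(applicant):
    # The "race-blind" rule: approve only applicants without the proxy trait.
    return not applicant["listens_hip_hop"]

rate_a = sum(blind_model(p) for p in population if p["group_a"]) / sum(
    p["group_a"] for p in population)
rate_b = sum(blind_model(p) for p in population if not p["group_a"]) / sum(
    not p["group_a"] for p in population)
print(f"group A approval rate: {rate_a:.0%}")  # roughly 20%
print(f"group B approval rate: {rate_b:.0%}")  # roughly 80%
```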

In many cases, the laws that dictate how banks and other lenders should operate were written long ago, when real-time information was difficult to come by and analysis was hard. While the financial industry is becoming increasingly digital, the oversight regime is stuck in a pen-and-paper time warp. As a result, the processes and procedures used to supervise banks are often slow, retroactive and overwhelmingly human-driven. Most analysis is done through an audit. That means regulators hire accountants and lawyers to study the policies, procedures and processes that make decisions ("black" isn't included in the algorithm) rather than analyze the effects or outcomes of what actually happens in the real world.

Simply put, this approach makes it difficult to identify those who use algorithms to hide their discriminatory tracks until well after the transgressions have occurred. Regulators may never catch discriminatory algorithms at all, because their review is a means test rather than an effects test.

"While the financial industry is becoming increasingly digital, the oversight regime is stuck in a pen and paper time warp."

But things don't have to be this way. Technology can also provide regulators with the means to ensure compliance with fair-lending rules. By testing the algorithms in use, it is possible to move regulation's focus from how an algorithm works to what it does. This does not mean auditing the code or the processes that created it, which would challenge even the most proficient programmers.

Rather, in our example of lending algorithms, we could feed billions of computer-generated profiles into the algorithm and observe the results. By generating profiles based on small changes to actual applicants, we can find out how specific characteristics affect the algorithm's decision-making. By correlating those changes with the outcomes, one can statistically test for bias. For example, perhaps adding certain likes on Facebook (say, liking ten hip-hop musicians) makes an applicant less likely to get a loan. If that pattern recurs in a statistically robust fashion, it is evidence of unfair discrimination.

In that way, regulators can simulate how applicants would fare if the system were live. Based on who was approved and who wasn't, they could identify biases in the results, allowing them to address discrimination before a real-world decision is made.
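
Here is a hedged sketch of what such an outcomes test could look like, assuming only black-box access to a lender's scoring function; the `score` function, feature names and thresholds below are stand-ins invented for illustration:

```python
# Outcomes-based audit sketch: toggle one feature on counterfactual copies
# of each applicant profile and test whether the approval rates differ.
import random
from statistics import NormalDist

def score(profile):
    # Stand-in for the lender's proprietary model.
    base = profile["income"] / 100_000
    penalty = 0.3 if profile["likes_hip_hop"] else 0.0
    return base - penalty > 0.4  # True means "approve"

def perturb(profile, **changes):
    """Copy an applicant's profile, changing only the audited feature."""
    clone = dict(profile)
    clone.update(changes)
    return clone

random.seed(0)
applicants = [{"income": random.uniform(30_000, 120_000),
               "likes_hip_hop": random.random() < 0.5}
              for _ in range(10_000)]

# Counterfactual pairs: the same applicant with one feature toggled.
with_like = [score(perturb(a, likes_hip_hop=True)) for a in applicants]
without_like = [score(perturb(a, likes_hip_hop=False)) for a in applicants]

n = len(applicants)
p_with, p_without = sum(with_like) / n, sum(without_like) / n

# Two-proportion z-test: is the approval-rate gap statistically robust?
pooled = (sum(with_like) + sum(without_like)) / (2 * n)
se = (pooled * (1 - pooled) * (2 / n)) ** 0.5
z = (p_without - p_with) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"approval gap: {p_without - p_with:.1%}  (z = {z:.1f}, p = {p_value:.2g})")
```

In a real audit, the profiles would be drawn from actual application data rather than generated at random, and the statistical test would control for legitimate underwriting factors.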

But this is only one example of how regulators can boost their effectiveness in the digital age, albeit an easy and high-value one. Getting there requires a rethinking of strategies and methods. Agencies must be proactive rather than reactive and embrace a shift from historical to real-time analysis. They must also emulate the transformation taking place in the industry they oversee, equipping themselves with what is necessary to carry out an updated mission.

From early on, the regulatory front lines have been dominated by accountants and lawyers; software engineers and product managers have largely remained in the background, ensuring that messaging systems and other technology infrastructure keep working. That no longer makes sense. To ensure that those they oversee are acting appropriately, regulators must make better use of the technology and human resources available to them.
