BankThink

AI can help banks make better decisions, but it doesn't remove bias

More and more, algorithms are managing our lives.

Sometimes we are not even aware of it. When we use Facebook, the news that is curated and served up to us is courtesy of an algorithm. When we select the next show to watch on Netflix, an algorithm decides what shows will be highlighted for us to choose from.

Business decisions, too, are now being influenced by algorithms: for instance, the insurance premium we are asked to pay, or which resumes surface when we post a new job opportunity to an online job board.

So many things that we now do are shaped by the results of computer algorithms. Part of the reason for this increasing reliance on computers to crunch the data and make decisions for us is that we believe it will reduce the influence of human bias in any given decision. However, the latest research suggests that rather than reining in our biases, algorithms may be worsening their impact.

To take a step back, when has bias caused problems in the past? Take insurance premiums as an example. Insurance companies have historically charged African Americans higher premiums for life insurance. Why? On average, African Americans have had shorter life expectancies, and so an early payout was judged more likely relative to other groups. But is that a reasonable outcome for any given middle-class African American family? Hardly. The question now is whether data science can produce a better and more objective reflection of reality.

Data scientist Cathy O’Neil has looked at this question in the context of hiring at a company like Fox News. Typically, the creators of an algorithm will try to build a model that identifies the ideal characteristics of a candidate for each job type. To find those characteristics, the program examines the history of successful candidates and pulls out what they have in common.

Yet as O’Neil discusses, 20 years of hiring data from Fox News would probably tell you that neither women nor African Americans have been very successful in obtaining positions at the company. An algorithm trained on that history to identify promising candidates would likely screen both women and African Americans out of the candidate pool. O’Neil sees computer science of this sort as a technology rather than a science: a tool for improving accuracy and efficiency, not for telling the truth.

This problem has also come up in the development of facial recognition software. Hundreds, maybe even thousands, of pictures of faces are used to train an algorithm to recognize a face. When the person in the photo is a white man, the software is right 99 percent of the time. But the darker the skin, the more errors arise, reaching nearly 35 percent for images of darker-skinned women, according to a new study that breaks fresh ground by measuring how the technology works on people of different races and genders.

These disparate results, calculated by Joy Buolamwini, a researcher at the M.I.T. Media Lab, show how biases in the real world can seep into artificial intelligence, the computer systems that underpin facial recognition. In modern artificial intelligence, data rules. AI software is only as smart as the data used to train it. If there are many more white men than black women in the training data, the system will be worse at identifying the black women. Researchers at Georgetown Law estimated that 117 million American adults are in face recognition networks used by law enforcement, and that African Americans were most likely to be singled out because they are disproportionately represented in mug-shot databases.
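One way to make that kind of disparity visible is to report a model's error rate separately for each demographic group rather than as a single overall accuracy figure. The sketch below is a minimal illustration of that idea in Python; it is not the MIT study's actual methodology, and the model, test data and group labels are hypothetical placeholders.

```python
# Minimal sketch: audit a trained classifier by breaking its error rate out
# per demographic group instead of reporting one overall number.
# `model`, `X_test`, `y_test` and `groups` are hypothetical placeholders.
import numpy as np

def error_rate_by_group(model, X_test, y_test, groups):
    """Return the misclassification rate for each group label in `groups`."""
    preds = np.asarray(model.predict(X_test))
    y_test = np.asarray(y_test)
    groups = np.asarray(groups)
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        rates[str(g)] = float(np.mean(preds[mask] != y_test[mask]))
    return rates  # e.g. a far higher error rate for one group than another
```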

As banks increasingly move toward using algorithms and artificial intelligence in many parts of their business — from the front office to risk and compliance desks — examining bias in the data is becoming more important. And regulators are not going to be satisfied with the output of an algorithm if they cannot understand what is underlying it.

Banks have recently been pushing into AI for trade surveillance and financial crimes compliance, driven by two key factors. First, there is the opportunity to reduce the amount of time analysts spend combing through false or benign alerts in both areas. Second, banks are using the technology to identify risks proactively through predictive analytics.

A bank that is reducing the number of suspicious activity alerts that its analysts must investigate will need to convince a regulator that any steep drop in the number of alerts is well-founded.

Who is teaching the machine what types of alerts to ignore and set aside? A human being reviews alerts that he or she considers false or benign, based on his or her own experience and insights, and “teaches” the algorithm to recognize and set those types of cases aside going forward. The bank will need to be able to explain this process to regulators and to demonstrate that it is based on facts and actionable insights identified by the algorithm acting in place of a human being. Banks cannot assume that regulators will trust the technology on its face, so they would be well advised to provide examples of how the AI was trained, encourage a deep dive into its workings and offer real-time demonstrations of how the software works.
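As a concrete illustration of that "teaching" step, the sketch below trains a scikit-learn classifier on past alerts that analysts have already dispositioned, then reports held-out performance of the kind a bank could document for a regulator. The file name, feature columns and label values are assumptions made for the example, not any bank's or vendor's actual schema.

```python
# Hypothetical sketch: analysts' past dispositions become training labels for a
# model that scores new alerts. Column names and the CSV file are illustrative.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

alerts = pd.read_csv("historical_alerts.csv")           # past alerts, already reviewed
features = alerts[["txn_amount", "txn_count_30d", "country_risk_score"]]
labels = alerts["analyst_disposition"]                   # e.g. "escalate" vs. "benign"

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Held-out precision and recall per class is one concrete artifact a bank can
# show a regulator to justify a steep drop in alert volume.
print(classification_report(y_test, model.predict(X_test)))
```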

Potentially more concerning is the case of a bank proactively identifying a risky customer based on patterns of activity observed in similar customers who have committed fraud or other misconduct. When a bank decides to curtail transactions with a certain type of customer, it is a little like arresting a terrorist before they blow up a building: the suspicion rests on a prediction of how likely the customer is to take an action, such as defrauding the bank. While this may be a significant improvement over decisions driven by pure human subjectivity, banks still need to guard against the programmer's own bias shaping the algorithm's suspicions.

A human review should be conducted of the predictions made by the machine learning algorithm to ensure fairness and to protect against biased decisions. That may seem odd when the point of the algorithm is to reduce human intervention, but a sense of fairness and equity is not something an algorithm can necessarily be trained in.
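One simple form that review can take is comparing how the model treats different customer segments: the share of each segment it flags, and the false-positive rate within each segment. The sketch below assumes hypothetical arrays of model scores, actual outcomes and segment labels; the threshold and column names are illustrative, not a prescribed standard.

```python
# Hypothetical sketch of a reviewer's bias check: flag rates and false-positive
# rates broken out by customer segment. Inputs are assumed array-like values.
import numpy as np
import pandas as pd

def review_by_segment(scores, actual_fraud, segments, threshold=0.8):
    """Summarize flag rate and false-positive rate for each customer segment."""
    scores = np.asarray(scores, dtype=float)
    actual_fraud = np.asarray(actual_fraud, dtype=bool)
    segments = np.asarray(segments)
    flagged = scores >= threshold
    rows = []
    for seg in np.unique(segments):
        in_seg = segments == seg
        legit = in_seg & ~actual_fraud      # customers in this segment with no actual fraud
        rows.append({
            "segment": str(seg),
            "flag_rate": float(np.mean(flagged[in_seg])),
            # Of the legitimate customers in the segment, how many were flagged anyway?
            "false_positive_rate": float(np.mean(flagged[legit])) if legit.any() else float("nan"),
        })
    return pd.DataFrame(rows)
```

A large gap between segments would be a prompt for the human reviewer to dig into the training data before the model's suspicions drive decisions about customers.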

AI is a powerful tool that can potentially play a useful role in bank decision-making and analysis. However, let’s not make the mistake of thinking that this will remove bias from the equation. Behind every algorithm there is a human being, whose insights, expertise and bias lay the groundwork for the computer’s decision-making framework.

Banks need to make sure they build safety provisions into the use of AI so that the technology is used to limit bias, not accentuate it.
