BankThink

To avoid bias, AI needs to 'explain' itself

Can a credit card be sexist? It’s not a question most people would have thought about before this week, but on Monday, state regulators in New York announced an investigation into claims of gender discrimination by Apple Card.

It has been reported that the algorithm Apple Card uses to set credit limits is biased against women. Tech entrepreneur David Heinemeier Hansson (@DHH) claimed that the card offered him 20 times more credit than his wife, even though she had the better credit score, while Apple’s own co-founder Steve Wozniak took to Twitter with a similar story, despite the fact that he and his wife share bank accounts and assets.

Goldman Sachs, the New York bank that backs the Apple Card, released a statement rejecting this assertion, saying that when it comes to assessing credit, they “have not and will not make decisions based on gender.”

We don’t know how Apple’s algorithm came to such seemingly sexist decisions, but the company isn’t alone in its use of this kind of technology. Banks and other lenders are increasingly using machine learning to cut costs and process loan applications faster.

And these accusations are only the tip of the iceberg of a much bigger problem facing artificial intelligence, one that goes far beyond the financial services sector. As AI is put to work in more and more applications across a range of industries, the opportunities for these systems to show bias keep multiplying.

Look at what happened when Amazon tried building an AI tool to help with recruiting, only to find that the algorithm discriminated against women because it had been trained on CVs that came overwhelmingly from men.

The AI revolution that has swept through banks, call centers, retailers, insurers and recruiters has brought bias with it, and the problem is getting worse as AI systems increasingly “teach” themselves, reinforcing existing bias as their decision-making develops.

This problem is exacerbated by investment in opaque “black box” AI systems, which cannot explain to the operator, regulator or customer how their decisions have been made. Because black box systems learn from each interaction, feeding them corrupt data can rapidly accelerate poor decision-making without the operators understanding why, or even being aware of it.

The only solution to this is “white box” AI, also known as Explainable AI: systems that can set out in easily understood language how the software operates and how decisions have been made.

This kind of transparency is key. By explaining how and why decisions are made, Explainable AI helps consumers and companies understand what they need to do to get a different outcome. In financial services, that might mean telling a customer how to turn a rejected mortgage application into an accepted one. With a recruitment tool, it could mean flagging to a human reviewer why a CV was turned down, so that the algorithm can be adjusted if it is clearly biased.
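To make that concrete, here is a minimal sketch of what such an explanation could look like. It assumes an interpretable logistic-regression model (built with scikit-learn) trained on synthetic data; the feature names, figures and approval rule are invented for illustration and do not describe how Apple Card, Goldman Sachs or any real lender scores applicants.

# A minimal sketch of a "white box" credit decision, assuming an interpretable
# logistic-regression model. All feature names, data and thresholds below are
# synthetic and purely illustrative; this is not any real lender's scoring model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicants: [credit score, income in $k, debt-to-income ratio]
X = np.column_stack([
    rng.normal(680, 60, 1000),
    rng.normal(75, 25, 1000),
    rng.uniform(0.05, 0.6, 1000),
])
# An invented approval rule, used only to label the synthetic training data
y = ((X[:, 0] > 650) & (X[:, 2] < 0.4)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
baseline = X.mean(axis=0)
feature_names = ("credit score", "income", "debt-to-income ratio")

def explain(applicant):
    """Report the decision and how each feature pushed it, relative to the average applicant."""
    decision = "approved" if model.predict([applicant])[0] == 1 else "declined"
    contributions = model.coef_[0] * (np.asarray(applicant) - baseline)
    print(f"Decision: {decision}")
    for name, c in zip(feature_names, contributions):
        direction = "raised" if c > 0 else "lowered"
        print(f"  {name} {direction} the approval score by {abs(c):.2f}")

# Example: an applicant with a 640 credit score, $90k income and a 45% debt ratio
explain([640.0, 90.0, 0.45])

The point is not this particular model but the read-out: a declined applicant can see which factors pulled the decision down and, by implication, what would need to change to flip it.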

The technology helps consumers take action on one end, while also opening new business avenues for banks and other institutions, which can use the same insights to offer more suitable products.

Today’s AI systems are already making crucial decisions on loans, medical diagnoses, and even criminal risk assessment. While there’s a lot of good that can come out of this, there has to be an element of transparency to instill accountability in the decision-making.

If we ignore this, instead of finding ourselves galloping towards a bright future, we risk sleepwalking into a tech-fueled dystopia.
