What the White House's blueprint for an AI bill of rights means for banks

The White House has published an AI Bill of Rights that advises banks and other companies on the kinds of consumer protections they should build into their artificial intelligence-based programs.

The blueprint, issued Tuesday, lays out five rights consumers should have as companies deploy AI: protection from unsafe or ineffective systems; protection from algorithmic discrimination; data privacy; notification when algorithmic systems are being used; and the ability to opt out and have access to customer service provided by human beings.

The bill of rights is not law and it's not enforceable, but it does reveal how the Biden administration wants consumer rights to be protected as companies like banks use AI.

"You can think of it as a preamble to future regulatory action," said Jacob Metcalf, program director of AI on the Ground for the nonprofit research group Data and Society. The White House Office of Science and Technology Policy, which produced the document, doesn't write laws, but it does set strategic priorities that other government agencies will follow, he explained.  

"You can really think of it as a tone-setting document," he said. 

Regulators and consumer advocates have repeatedly questioned banks' and fintechs' use of AI, especially in lending. Consumer Financial Protection Bureau Director Rohit Chopra warned recently that reliance on artificial intelligence in loan decisions could lead to illegal discrimination. Banks' use of AI in facial recognition and in hiring has also drawn scrutiny. And this is just the tip of the iceberg: Banks and fintechs use AI in many other areas, including fraud detection, cybersecurity and virtual assistants.

The bill of rights specifically focuses on financial services a few times. For instance, an appendix listing the types of systems the rights should cover includes "financial system algorithms such as loan allocation algorithms, financial system access determination algorithms, credit scoring systems, insurance algorithms including risk assessments, automated interest rate determinations, and financial algorithms that apply penalties (e.g., that can garnish wages or withhold tax returns)."

Some in the financial industry are skeptical about how effective this bill of rights will be. Others worry that some of the rights will be too hard to implement.

"At least it sends a signal to the industry: Hey, we will be watching," said Theodora Lau, founder of Unconventional Ventures. "That said, however, we are a bit late to the party, especially when even the Vatican has weighed in on the subject, not to mention the EU. More concerning is that this is nonbinding with no enforcement measures, like a toothless tiger. It will be up to lawmakers to propose new bills. And even when anything is passed, having laws is one thing, reinforcing them is another."

Lau noted that the EU has proposed legislation that governs the use of AI in special high-risk areas, including loan applications. 

"Will we be able to follow suit? And if so, when? Or will we be subjected to the whims of the political winds?" she said.

The blueprint's intent, to set guardrails around AI systems so that credit decisions are not final and can be contested, is reasonable, said Marc Stein, founder and CEO of Underwrite.ai.

"But I have serious reservations as to how this could be implemented in the financial services space," he said.

Application to lending

One of the most controversial places banks use artificial intelligence is in loan decisions. Regulators and consumer advocates have warned lenders that they still have to comply with fair-lending laws when they use AI.

The federal government is starting to require companies to prove that the AI software they're using isn't discriminatory, Metcalf said. 

"We've existed in a regulatory environment where you can rely on claims of magic without actually having to put your money where your mouth is and provide an assessment about how your system actually works," he said. "You can get away with simply providing hypotheticals. I see the federal government moving towards a put up or shut up environment. If you're going to provide a product that operates in these regulated areas, including finance and banking, you have to affirmatively provide an assessment that shows that you operate within the bounds of the law."

But Stein argued that there are practical difficulties to applying the blueprint's directives to lending, such as the clause that consumers should be able to opt out and have access to a person who can quickly consider and remedy problems.

"If an automated interest rate determination is made based upon FICO tiers, how would one apply this?" Stein said. "What function would the human be called upon to perform? The decision isn't made by a black-box algorithm, and it was set up by human underwriters to run automatically. What exactly would a customer appeal? That using FICO scores is unfair? That may be a valid argument to make, but it has nothing to do with AI and can't be addressed by this blueprint."

Stein noted that lenders have long-standing rules that address discrimination and set the liability for bad behavior on the lender. 

"If a lender discriminates or misleads, they should be punished," he said. "If an automated system is used in that violation, then the lender that deployed the automated system is liable. It's certainly not a reasonable defense to argue that you didn't realize that your system broke the law."

AI in hiring

The use of AI in hiring decisions has also been controversial, due to the fear that the software could pick up signals in resumes or videos that discriminate against already disadvantaged groups.

"There's all kinds of public, well-known examples of machine learning making really discriminatory and frankly irrelevant decisions, and the rest of us are expected to just accept on its face value that it works," Metcalf said. 

He pointed to Amazon's attempt to use its own algorithmic hiring tool to process applications for data scientists and executives. 

"They found that it gave really high scores to anybody named Chad and anybody who played lacrosse, and it gave very low scores to anyone that had 'woman' in their resume anywhere, including the head of the Women's Science Club at Harvard," Metcalf said. "So Amazon dropped the tool. They worked on it for three years and Amazon couldn't make it work."

Fraud detection

The blueprint's warning that consumers should be protected from unsafe or ineffective systems could apply to AI-based fraud detection software that is overly aggressive about flagging suspicious activity, Metcalf said. 

"You could lose access to your cash," he said. 

The challenger bank Chime ran into this problem last year when an overzealous fraud-detection system led it to inappropriately close customers' accounts.

"If it happens at Saturday at 10:00 p.m., you might not get your bank account back until Monday morning," Metcalf said. "There are safety issues. The question for me, as someone who's very interested in algorithm accountability and corporate governance, is, what testing is that bank obligated to do regarding the accuracy of that prediction? Have they tested it against realistic accounts of demographic divergence? We live in a segregated society, and African Americans might have different banking behaviors than whites do. Are we in a situation where false positive fraud alerts go up on people that just have innocuous banking patterns that are common to African Americans? What obligation is the bank under to test for those scenarios?"

A bank might not know how to run such tests and may not have the resources to do so, he added. 
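As a rough illustration of what such a test could involve, the sketch below computes a false-positive fraud-alert rate for each demographic group from a labeled evaluation set. The records and field names are hypothetical stand-ins; a real test would run on historical data, with appropriate privacy controls and far larger samples.

```python
# Sketch: compare false-positive fraud-alert rates across groups.
# Records and field names are hypothetical stand-ins.
from collections import defaultdict

records = [
    # (group, flagged_by_model, actually_fraud)
    ("group_a", True, False),
    ("group_a", False, False),
    ("group_b", True, False),
    ("group_b", True, True),
    ("group_b", True, False),
]

false_positives = defaultdict(int)  # legitimate accounts flagged as fraud
legitimate = defaultdict(int)       # all legitimate accounts per group

for group, flagged, fraud in records:
    if not fraud:
        legitimate[group] += 1
        if flagged:
            false_positives[group] += 1

for group in sorted(legitimate):
    fpr = false_positives[group] / legitimate[group]
    print(f"{group}: false-positive rate {fpr:.2%}")
```

A large gap between groups would be exactly the kind of result Metcalf argues regulators should expect banks to look for, document and explain.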

"I think we have to head towards the situation where that kind of testing is obligatory, where there's transparency and documentation and where federal regulators are asking those questions of the banks and telling them that they're expected to have an answer and that there's recourse," Metcalf said.

One of the most important aspects of the bill of rights is its insistence on recourse for errors, he said. 

"If an algorithm flags your bank account for fraud and it's wrong, and it happens on Saturday night, who's going to fix it for you?" Metcalf said. "Is there a customer service agent empowered to fix the computer's problem? Usually there isn't. The relationship between error and human intervention and recourse is something that bankers should be thinking about quite explicitly. If you're going to render automated decisions that can affect people's lives, then you'd better have a route by which they can get it fixed when you're wrong." 
