AI use carries bias risk for financial regulators
The House Financial Services Artificial Intelligence Task Force recently started to examine the opportunities and challenges presented by the growing use of artificial intelligence in the financial services industry.
The AI task force should broaden its scope to also explore whether the AI and algorithms used by financial regulators themselves help or harm the public.
Financial regulators view AI as a useful tool to increase efficiency. For instance, regulators have used machine learning to detect fishy text in corporate filings, identify money-laundering networks and discover tax cheats.
However, scant attention is given to the potential for bias in supervisory algorithms used by regulators.
First, consider how bias can enter into the anti-money-laundering reports that banks file with the Financial Crimes Enforcement Network. Structural barriers, such as linguistic or socioeconomic hurdles, can inadvertently embed bias in the underlying data.
About 14% of Hispanic households in the U.S. are “unbanked” and mostly rely on cash and prepaid debit cards, according to the Federal Deposit Insurance Corp.’s latest survey of unbanked and underbanked households. Another 17% of African-American households are also unbanked. In some cases, these adults do not have a current, government-issued photo ID.
As a result, bank AML algorithms may flag frequent but innocuous prepaid card transactions for Hispanic customers and banks could file disproportionate reports with Fincen on “suspicious” IDs for African-Americans.
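To make the mechanism concrete, here is a minimal sketch, not any bank’s actual system, of how a naive rule keyed to prepaid-card frequency and ID status would sweep in unbanked customers whose behavior is entirely innocuous. The `Account` fields and the threshold are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Account:
    holder: str
    prepaid_txns_per_month: int
    has_photo_id: bool

def naive_aml_flag(acct: Account, txn_threshold: int = 20) -> bool:
    """Hypothetical rule: flag on raw prepaid-card frequency or a missing ID.

    It cannot distinguish structuring from a household that simply
    transacts in cash and prepaid cards out of necessity.
    """
    return acct.prepaid_txns_per_month > txn_threshold or not acct.has_photo_id

# An unbanked customer who uses prepaid cards for everyday spending
# trips the same rule as a genuinely suspicious pattern would.
everyday_user = Account("cash-based household", prepaid_txns_per_month=35,
                        has_photo_id=False)
print(naive_aml_flag(everyday_user))  # True: flagged despite innocuous behavior
```

The bias here comes from the features themselves: transaction frequency and ID status are proxies for being unbanked, not for money laundering.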
Fincen has found explicit bias in the AML data reported to the agency and reminded banks that an AML “filing should not be based on a person’s ethnicity,” such as being “of Middle Eastern” descent. Despite the potential for bias, Fincen’s data is widely available to law enforcement agencies, and the agency itself employs algorithms to screen filings and identify reports that merit further review.
Missing or incomplete data can also introduce bias into the algorithms.
Financial regulators use the Consumer Financial Protection Bureau’s consumer complaints database to decide which cases to pursue. Over 80% of complaints in 2017 were submitted by consumers directly via the agency’s online platform.
However, individuals without access to the internet are disproportionately elderly and tend to have less education and lower incomes. Scammers who prey on this group may escape justice because the CFPB is inundated with complaints filed online by wealthier, younger consumers.
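The selection effect above can be sketched in a few lines. The schemes, counts, and reporting rates below are invented for illustration; the point is only that ranking cases by raw complaint volume underweights harms to groups less likely to file online:

```python
# Hypothetical data: raw complaint counts per scheme, and an assumed
# rate at which victims of each scheme actually file a complaint.
complaints_received = {"online_broker_scam": 900, "door_to_door_elder_scam": 100}
reporting_rate = {"online_broker_scam": 0.60, "door_to_door_elder_scam": 0.05}

def estimated_victims(scheme: str) -> float:
    """Adjust raw counts for how likely each victim group is to complain."""
    return complaints_received[scheme] / reporting_rate[scheme]

# By raw counts the online scam dominates (900 vs. 100), so it gets
# the enforcement attention. Adjusted for who can actually file,
# the offline scheme harms more people.
for scheme in complaints_received:
    print(scheme, round(estimated_victims(scheme)))
```

A regulator triaging purely on the 900-versus-100 raw counts would deprioritize the scheme with more total victims.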
Finally, consider a securities regulator’s use of algorithms to identify high-risk financial advisers and firms. The Financial Industry Regulatory Authority developed a predictive model using criteria, including complaints and terminations, to generate a numeric risk score for financial advisers.
Importantly, research on adviser misconduct has found that female advisers engage in less costly misconduct and have a lower propensity toward repeat offenses, yet women are more likely than men to be terminated after an incident. Because terminations feed into the model, Finra's algorithm could produce the adverse and inaccurate result of over-representing women among advisers classified as high-risk.
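A toy version of such a score makes the problem visible. This is a hypothetical simplification, not Finra's actual model; the weights are invented. If women are terminated more often than men for comparable conduct, a termination feature imports that disparity directly into the score:

```python
def risk_score(complaints: int, terminations: int,
               w_complaints: float = 1.0, w_terminations: float = 3.0) -> float:
    """Toy linear risk score; higher means 'riskier'. Weights are invented."""
    return w_complaints * complaints + w_terminations * terminations

# Two advisers with identical underlying misconduct (one complaint each),
# but only one was terminated over it. The terminated adviser scores
# four times higher, even though the conduct was the same.
retained_adviser = risk_score(complaints=1, terminations=1 - 1)
terminated_adviser = risk_score(complaints=1, terminations=1)
print(retained_adviser, terminated_adviser)  # 1.0 4.0
```

The model is not scoring misconduct; it is scoring misconduct plus the employer's reaction to it, and any bias in that reaction is baked in.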
Additionally, Finra recently proposed that firms classified as high-risk could improve their classification via a “one-time opportunity to reduce staffing levels.” This “opportunity” may further compound the lopsided terminations women face in the financial services industry.
Regulators need to review their own supervisory algorithms for unintended discriminatory outcomes. Otherwise, they may unwittingly play a role in perpetuating inequalities.
Whether financial regulators’ use of emerging technology becomes a force for good or ill may depend on how the House Financial Services AI task force tackles the issue.