Four federal agencies cite dangers in AI systems' ability to discriminate

The Justice Department's civil rights division, Consumer Financial Protection Bureau, Federal Trade Commission and Equal Employment Opportunity Commission released a joint statement reiterating their commitment to enforcing existing civil rights laws and regulations.

Four federal agencies have pledged to "vigorously enforce" anti-discrimination laws to protect against the potential dangers of artificial intelligence and automated systems.

On Tuesday, the Justice Department's civil rights division, Consumer Financial Protection Bureau, Federal Trade Commission and Equal Employment Opportunity Commission released a joint statement reiterating their commitment to enforcing existing civil rights, consumer protection and fair competition laws and regulations.

The four agencies did not issue new regulatory guidance but instead cited previous concerns about potentially harmful uses of automated systems, including AI, which rely on vast amounts of data to find patterns, perform tasks or make recommendations. The agencies said such systems have the potential to produce outcomes that result in unlawful discrimination.

"This is an all-hands-on-deck moment, and the Justice Department will continue to work with our government partners to investigate, challenge, and combat discrimination based on automated systems," Assistant Attorney General Kristen Clarke, of the Justice Department's civil rights division, said in a press release. "As social media platforms, banks, landlords, employers, and other businesses that choose to rely on artificial intelligence, algorithms and other data tools to automate decision-making and to conduct business, we stand ready to hold accountable those entities that fail to address the discriminatory outcomes that too often result."

Many technology companies advertise such systems as "providing insights and breakthroughs," or "increasing efficiencies and cost-savings," the agencies said, but the systems also have "the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes."

Federal Trade Commission Chair Lina M. Khan reiterated that the FTC will "vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition." She said "there is no AI exemption to the laws on the books." 

"We already see how AI tools can turbocharge fraud and automate discrimination, and we won't hesitate to use the full scope of our legal authorities to protect Americans from these threats," Khan said in a press release. "Technological advances can deliver critical innovation — but claims of innovation must not be cover for lawbreaking." 

The agencies said they are upholding core American principles of "fairness, equality, and justice" by applying existing enforcement authorities to automated systems.

"We take seriously our responsibility to ensure that these rapidly evolving automated systems are developed and used in a manner consistent with federal laws, and each of our agencies has previously expressed concern about potentially harmful uses of automated systems," the agencies said.  

CFPB Director Rohit Chopra has been an outspoken critic of Big Tech firms and of the use of algorithms and other analytics to target specific customers with ads or content. Last year, the CFPB issued an interpretive rule stating that digital marketers cannot claim an exemption from the Consumer Financial Protection Act and are liable for "unfair, deceptive or abusive acts or practices," known as UDAAP violations.

"Technology marketed as AI has spread to every corner of the economy, and regulators need to stay ahead of its growth to prevent discriminatory outcomes that threaten families' financial stability," Chopra said in a press release. 
