"Regulators must make clear that mere acknowledgement of a less-discriminatory [consumer lending] model is not, alone, evidence of past wrongdoing," Yolanda D. McGill of Zest AI writes as part of her call for policies that promote continuous improvement of underwriting systems.
On March 30, 2023, Patrice Ficklin, head of the Consumer Financial Protection Bureau's Office of Fair Lending, publicly clarified for the first time that consumer lenders have an affirmative duty to monitor, refine and update lending models to ensure that no less-discriminatory models are available. This statement is critical because lenders do not pursue less-discriminatory alternative (LDA) underwriting models consistently, for a variety of reasons: LDA searches have historically been cumbersome to conduct, and they may yield less accurate models. Fortunately for the millions of Americans historically underserved by our financial system, new artificial intelligence and machine learning tools can facilitate more effective searches that quickly and efficiently yield multiple alternative models that are both less discriminatory and equally accurate.
Against this backdrop, Ficklin's clarification seems like a simple, clear affirmation of the Equal Credit Opportunity Act and its implementing regulation, Regulation B. Taken in conjunction with the bureau's warning to lenders against using technologies in ways that hamper compliance, the clarification could ultimately prove to be a watershed moment in advancing the use of AI in consumer finance to enhance fairness and financial inclusion. For that moment to be realized, however, regulators must take additional, bold action to ensure that American consumers benefit from proper application of a law intended to increase fairness, inclusion and, ultimately, access to credit.
First, the bureau and other regulators should explicitly recognize LDA search as an advantageous application of AI in financial services, given the technology's ability to rapidly compare multiple models in searching for alternatives that are fairer and less discriminatory. Under the Equal Credit Opportunity Act, all lenders are required to assess whether their current lending models have a discriminatory impact on protected classes, then ascertain whether LDAs are available that would satisfy their legitimate business objectives. Advances in fair lending analytics are making these searches more accessible and efficient for all lenders, with significant benefits for consumers.
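To make the mechanics concrete, the sketch below shows what an automated LDA search can look like in miniature: several candidate models are trained, each is scored on both predictive accuracy and a disparity metric, and any candidate that is roughly as accurate as the incumbent model but less disparate is flagged for review. This is a minimal illustration, not any vendor's or regulator's method; the data is synthetic, and the thresholds and the adverse impact ratio metric are common conventions rather than regulatory standards.

    # Hypothetical sketch of an automated LDA search (Python, scikit-learn).
    # All names, thresholds and data are illustrative only.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic applicant data; `group` stands in for a protected-class flag,
    # used only for fairness testing and never as a model input.
    X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
    group = np.random.default_rng(0).integers(0, 2, size=len(y))
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, y, group, test_size=0.3, random_state=0)

    def adverse_impact_ratio(approved, group):
        # Approval-rate ratio between the protected group (1) and the
        # control group (0); values closer to 1.0 indicate less disparity.
        return approved[group == 1].mean() / approved[group == 0].mean()

    # Candidate models: an incumbent plus variants. Here only regularization
    # strength varies; in practice, feature sets or architectures would too.
    candidates = {f"C={c}": LogisticRegression(C=c, max_iter=1000)
                  for c in (0.01, 0.1, 1.0, 10.0)}

    results = {}
    for name, model in candidates.items():
        model.fit(X_tr, y_tr)
        scores = model.predict_proba(X_te)[:, 1]
        approved = scores >= 0.5  # illustrative approval cutoff
        results[name] = (roc_auc_score(y_te, scores),
                         adverse_impact_ratio(approved, g_te))

    base_auc, base_air = results["C=1.0"]  # treat one model as the incumbent
    for name, (auc, air) in results.items():
        # Flag candidates that give up little accuracy but reduce disparity.
        is_lda = auc >= base_auc - 0.01 and air > base_air
        flag = "  <- candidate LDA" if is_lda else ""
        print(f"{name}: AUC={auc:.3f}  AIR={air:.3f}{flag}")

In a production setting, the disparity metric, the accuracy tolerance and the strategy for generating candidate models would all be governed by the lender's fair lending program, which is precisely where regulatory guidance would help.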
Recent research published by the nonprofit FinRegLab highlighted the potential advantages of using AI tools to comply with LDA search requirements, as well as the risks of using AI without adequate attention to fairness. Advanced, explainable AI technologies for credit underwriting that include robust LDA searches as part of fair lending testing foster fairness and inclusion in financial services.
Second, as my colleague argued in these pages last year, regulators must make clear that mere acknowledgement of a less-discriminatory model is not, alone, evidence of past wrongdoing. Today, many lenders fail to pursue robust LDA searches, whether due to a lack of sophistication in developing and testing alternative models, inertia or apathy, or fear that acknowledging an LDA may somehow indicate wrongdoing with respect to legacy models. Lenders should instead be encouraged to perform robust LDA searches and improve their models, rather than stick with the status quo for fear of incurring liability.
And finally, as we explained in our December 2020 comment letter, the bureau should issue public guidance on LDA regulatory expectations, including how it assesses the robustness of LDA search techniques and methodologies. Clarity as to the material metrics and factors lenders should consider in LDA search and deployment, and the options for balancing fairness with accuracy, would accelerate alignment with the bureau's express expectations. Coherent, compliant application of AI technology holds real promise for American consumers and the financial services providers who serve them.
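As one illustration of the kind of metric such guidance might address: fair lending analyses often borrow the "four-fifths rule," under which a protected group's approval rate below 80% of the control group's rate is treated as evidence of adverse impact. The snippet below is a minimal sketch of that check; the approval rates are invented, and the 0.8 cutoff is a convention from employment-discrimination practice, not a lending regulation.

    def four_fifths_check(protected_rate, control_rate):
        # True when the protected group's approval rate is at least 80% of
        # the control group's, the conventional adverse-impact cutoff.
        return protected_rate / control_rate >= 0.8

    print(four_fifths_check(0.42, 0.60))  # 0.42 / 0.60 = 0.70 -> False, disparity flagged
    print(four_fifths_check(0.52, 0.60))  # 0.52 / 0.60 ~= 0.87 -> True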