Since his appointment on October 12, 2021, Consumer Financial Protection Bureau (“CFPB” or the “Bureau”) Director Rohit Chopra has embarked on an aggressive campaign to identify and punish discriminatory practices in the financial services industry. While increased regulatory scrutiny of fair lending compliance is nothing new for lenders, the Bureau’s March 16, 2022 blog post titled “Cracking Down on Discrimination in the Financial Industry” (the “UDAAP blog post”) and accompanying press release announcing “changes to its supervisory operations to better protect families and communities from unlawful discrimination, including in situations where fair lending laws may not apply” signal an expansion of regulation of which all financial institutions should be aware.
Financial institutions should also be aware of the CFPB’s recent focus on the use of artificial intelligence to engage in what it calls “algorithmic redlining” or “robo-discrimination,” and the Bureau’s latest comments on the use of “black box” underwriting tools and compliance with the Equal Credit Opportunity Act (“ECOA”), set out in its May 26, 2022 press release titled “CFPB Acts to Protect the Public from Black-Box Credit Models Using Complex Algorithms” (the “Black Box press release”) and in Consumer Financial Protection Circular 2022-03 (“Circular 2022-03”), which addresses “[a]dverse action notification requirements in connection with credit decisions based on complex algorithms.”
Expanded Oversight of Unlawful Discrimination as an Unfair Practice
In the UDAAP blog post, the Bureau announced: “[A]s part of reviewing banks and other companies for compliance with consumer protection rules, the CFPB will examine discriminatory conduct that violates the federal prohibition against unfair practices. The CFPB will closely examine financial institutions’ decision-making in advertising, pricing, and other areas to ensure that companies appropriately test for and eliminate unlawful discrimination.” By expanding the scope of what is considered “unfair,” the Bureau says it can review virtually any activity in the consumer credit process and initiate enforcement action against a financial institution for discriminatory practices under its unfair, deceptive, or abusive acts or practices (“UDAAP”) authority. An act or practice is unfair when: (1) it causes or is likely to cause substantial injury to consumers; (2) the injury is not reasonably avoidable by consumers; and (3) the injury is not outweighed by countervailing benefits to consumers or to competition.
The CFPB has also published an updated UDAAP section of its examination manual (the “Exam Manual”) that financial institutions should consider carefully. The updated Exam Manual notes that “[c]onsumers can be harmed by discrimination, whether intentional or not. Discrimination can be unfair in cases where the conduct may also be covered by the ECOA, as well as in cases where the ECOA does not apply.” The CFPB will examine for discrimination in all consumer finance markets, including credit, servicing, collections, consumer reporting, payments, remittances, and deposits. CFPB examiners will require supervised companies to show their processes for assessing risks and discriminatory outcomes, including documentation of customer demographics and the impact of products and fees on different demographic groups. The CFPB will review how companies test and monitor their decision-making processes for unfair discrimination, as well as for discrimination under the ECOA.
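Neither the Exam Manual nor the UDAAP blog post prescribes a particular testing methodology. Purely by way of illustration, the kind of outcome monitoring examiners may expect could start with something as simple as comparing approval rates across demographic segments. The Python sketch below is a minimal, hypothetical example; the column names, the synthetic data, and the 0.80 review threshold (borrowed from the EEOC’s “four-fifths” rule of thumb, which is not a CFPB standard) are all assumptions.

```python
# Hypothetical sketch of outcome monitoring across demographic groups.
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame) -> pd.DataFrame:
    """Compare each group's approval rate to the most-approved group's rate."""
    rates = df.groupby("group")["approved"].mean()
    ratios = rates / rates.max()
    return pd.DataFrame({
        "approval_rate": rates,
        "impact_ratio": ratios,
        "flag_for_review": ratios < 0.80,  # four-fifths rule of thumb (assumption)
    })

# Synthetic example data; "group" and "approved" are hypothetical columns
# (demographics might come from HMDA data or a proxy methodology in practice).
applications = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "approved": [1] * 70 + [0] * 30 + [1] * 55 + [0] * 45,
})
print(adverse_impact_ratios(applications))
```

A ratio below the threshold is a prompt for further statistical and file-level review, not evidence of a violation.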
This expanded oversight also extends to marketing activities. Commenting in the UDAAP blog post, Bureau enforcement and supervision staff said: “[C]ertain targeted advertising and marketing, based on machine learning models, can harm consumers and undermine competition. Consumer advocates, investigative journalists, and academics have shown how data collection and consumer surveillance fuel complex algorithms that can target highly specific consumer demographics to exploit perceived vulnerabilities and reinforce structural inequalities. We will be taking a close look at companies’ reliance on automated decision-making models and any potential discriminatory outcomes.”
Robo-Discrimination or Algorithmic Redlining
The CFPB describes “robo-discrimination” or “algorithmic redlining” as the practice of applying artificial intelligence and other technologies to a financial institution’s underwriting process to achieve a discriminatory outcome, regardless of the facial neutrality of that underwriting process. Speaking at the announcement of the Trustmark National Bank settlement, CFPB Director Chopra commented:
[W]e will also closely monitor digital redlining, disguised by so-called neutral algorithms, which may reinforce long-existing biases. . . . While number-crunching machines may seem capable of removing human bias from the equation, that is not what is happening. The results of academic studies and news reports raise serious questions about algorithmic bias. . . . Too many families fell victim to the robo-signing scandals of the last crisis, and we must not allow robo-discrimination to proliferate in a new crisis. I am pleased that the CFPB continues to contribute to the government-wide mission to root out all forms of redlining, including algorithmic redlining.
Then, on May 26, 2022, the Bureau issued the Black Box press release along with Circular 2022-03. Commenting on the release of Circular 2022-03, CFPB Director Chopra said: “Companies are not absolved of their legal responsibilities when they let a black-box model make lending decisions. . . . The law gives every applicant the right to a specific explanation if their application for credit was denied, and that right is not diminished simply because a company uses a complex algorithm that it doesn’t understand.” The Bureau states in the press release: “Law-abiding financial companies have long used advanced computational methods as part of their credit decision-making processes, and they have been able to provide the rationales for their credit decisions. However, some creditors may make credit decisions based on the outputs of complex algorithms, sometimes called ‘black-box’ models. The reasoning behind some of these models’ outputs may be unknown to the models’ users, including the models’ developers. With such models, adverse action notices that meet ECOA’s requirements may not be possible.” (emphasis added)
Unfortunately, neither the Black Box press release nor Circular 2022-03 provides further details or concrete examples of the types of automated underwriting tools that may be too complex to use without violating the ECOA, or of the types that may be acceptable. In fact, to date, the Bureau has not provided clear guidance as to which complex algorithmic underwriting systems it believes could be discriminatory.
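What Circular 2022-03 does make clear is that a creditor relying on a model must still be able to identify the specific, principal reasons for an adverse action. For a model whose structure is transparent, those reasons can be derived directly from the model itself. The Python sketch below is a minimal, hypothetical illustration using a simple linear scoring model; the feature names, reason-code text, and synthetic data are assumptions for illustration only, and this is not a method endorsed by the Bureau.

```python
# Hypothetical sketch: deriving "principal reasons" for an adverse action
# notice from a transparent (linear) scoring model. All feature names,
# reason-code text, and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["debt_to_income", "utilization", "recent_inquiries", "months_on_file"]
REASON_CODES = {
    "debt_to_income":   "Income insufficient for amount of credit requested",
    "utilization":      "Proportion of balances to credit limits is too high",
    "recent_inquiries": "Too many recent credit inquiries",
    "months_on_file":   "Length of credit history",
}

# Fit a toy approval model on synthetic data (1 = approved, 0 = denied).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(FEATURES)))
y = (X @ np.array([-1.0, -1.0, -0.5, 0.8]) + rng.normal(size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

# Baseline: the average profile of approved applicants.
baseline = X[y == 1].mean(axis=0)

def principal_reasons(applicant, top_n=4):
    """Rank features by how far they pull this applicant's score below
    the baseline, using the linear contribution coef * (x - baseline).
    Only features that actually hurt the score become stated reasons."""
    contrib = model.coef_[0] * (applicant - baseline)
    worst_first = np.argsort(contrib)  # most negative contributions first
    return [REASON_CODES[FEATURES[i]] for i in worst_first[:top_n] if contrib[i] < 0]

# Example: explain the first applicant the model would deny.
denied = X[model.predict(X) == 0][0]
print(principal_reasons(denied))
```

Note the design point: this only works because the model’s contributions are directly inspectable. With a genuinely opaque model, a creditor would need a post-hoc explanation method it can validate, and Circular 2022-03 places the burden of that accuracy squarely on the creditor; that is precisely the Bureau’s concern.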
Conclusion
Financial institutions should re-examine their compliance management systems in light of the expanded UDAAP standard and focus on the potential discriminatory impact of the use of artificial intelligence across their operations. Financial institutions should have a clearly defined process for assessing the risks and potential discriminatory outcomes of all activities (including marketing), documenting customer demographics, and assessing the impact of products and fees on different demographic groups. Financial institutions too small for direct CFPB examination should nevertheless take note of these developments and take steps to ensure compliance in light of the civil remedies (including class action liability) available under the ECOA. Not only are CFPB rules, regulations, and interpretations often adopted by other prudential regulators, but the Bureau’s enforcement actions frequently originate in consumer complaints. Financial institutions should also closely examine the variables contained in any automated underwriting tool and carefully review the existing language used in adverse action notices issued in connection with credit denials. While it seems unlikely that the Bureau will target many of the “off-the-shelf” underwriting programs that community banks and credit unions rely on, financial institutions should review their agreements with these service providers to ensure that the providers are required to supply accurate descriptions of the reasons for any adverse action.