CFPB seeking innovation in adverse action notices when using artificial intelligence
On July 7, the CFPB released a blog post discussing the use of artificial intelligence (AI) and machine learning (ML), addressing the regulatory uncertainty that accompanies their use, and encouraging stakeholders to use the Bureau’s innovation programs to address these issues. The blog post notes that “AI has the potential to expand credit access by enabling lenders to evaluate the creditworthiness of some of the millions of consumers who are unscorable using traditional underwriting techniques,” but using AI may create or amplify risks, including unlawful discrimination, lack of transparency, privacy concerns, and inaccurate predictions.
The blog post discusses how using AI/ML models in credit underwriting may raise compliance concerns under ECOA and FCRA provisions that require creditors to issue adverse action notices detailing the main reasons for a denial, particularly because AI/ML decisions can be “based on complex interrelationships.” Recognizing this, the Bureau explains that there is flexibility in the current regulatory framework “that can be compatible with AI algorithms.” As an example, citing the Official Interpretation to Regulation B, the blog post notes that “a creditor may disclose a reason for a denial even if the relationship of that disclosed factor to predicting creditworthiness may be unclear to the applicant,” which would allow a creditor to use AI/ML models where the variables and key reasons are known, but the relationship between them is not intuitive. Additionally, neither ECOA nor Regulation B requires the use of a specific list of reasons, giving creditors flexibility when providing reasons that reflect alternative data sources.
To address the continued regulatory uncertainty, the blog post encourages stakeholders to use the Trial Disclosure, No-Action Letter, and Compliance Assistance Sandbox programs offered by the Bureau (covered by InfoBytes here) to take advantage of AI/ML’s potential benefits. The blog post mentions three specific areas the Bureau is particularly interested in exploring: (i) “the methodologies for determining the principal reasons for an adverse action”; (ii) “the accuracy of explainability methods, particularly as applied to deep learning and other complex ensemble models”; and (iii) the conveyance of principal reasons “in a manner that accurately reflects the factors used in the model and is understandable to consumers.”