On July 7, the CFPB released a blog post discussing the use of artificial intelligence (AI) and machine learning (ML), addressing the regulatory uncertainty that accompanies their use, and encouraging stakeholders to use the Bureau’s innovation programs to address these issues. The blog post notes that “AI has the potential to expand credit access by enabling lenders to evaluate the creditworthiness of some of the millions of consumers who are unscorable using traditional underwriting techniques,” but using AI may create or amplify risks, including unlawful discrimination, lack of transparency, privacy concerns, and inaccurate predictions.
The blog post discusses how using AI/ML models in credit underwriting may raise compliance concerns under ECOA and FCRA provisions that require creditors to issue adverse action notices detailing the main reasons for a denial, particularly because AI/ML decisions can be “based on complex interrelationships.” Recognizing this, the Bureau explains that there is flexibility in the current regulatory framework “that can be compatible with AI algorithms.” As an example, citing the Official Interpretation to Regulation B, the blog post notes that “a creditor may disclose a reason for a denial even if the relationship of that disclosed factor to predicting creditworthiness may be unclear to the applicant,” which would allow a creditor to use AI/ML models where the variables and key reasons are known, but the relationship between them is not intuitive. Additionally, neither ECOA nor Regulation B requires the use of a specific list of reasons, allowing creditors flexibility when providing reasons that reflect alternative data sources.
To address the continued regulatory uncertainty, the blog post encourages stakeholders to use the Trial Disclosure, No-Action Letter, and Compliance Assistance Sandbox programs offered by the Bureau (covered by InfoBytes here) to take advantage of AI/ML’s potential benefits. The blog post identifies three specific areas the Bureau is particularly interested in exploring: (i) “the methodologies for determining the principal reasons for an adverse action”; (ii) “the accuracy of explainability methods, particularly as applied to deep learning and other complex ensemble models”; and (iii) the conveyance of principal reasons “in a manner that accurately reflects the factors used in the model and is understandable to consumers.”
- Kathryn L. Ryan and Jedd R. Bellman to discuss “Risk and compliance management: Are you covered?” at a Mortgage Bankers Association webinar
- Melissa Klimkiewicz and Daniel A. Bellovin to discuss “Things to know about flood insurance” at a NAFCU webinar
- Hank Asbill to discuss “Ethical issues at sentencing” at the 31st Annual National Seminar on Federal Sentencing
- Max Bonici will moderate a panel on “Enforcement risk and other regulatory and compliance issues related to crypto and digital assets” at the American Bar Association’s 2022 Annual Meeting
- John R. Coleman to provide a “CFPB Update” at MBA’s 2022 Regulatory Compliance Conference
- Amanda R. Lawrence to discuss “The shifting data privacy and data protection landscape” at MBA’s 2022 Regulatory Compliance Conference
- Jeffrey P. Naimon to provide “An update on key fair lending cases and the CRA and UDAAP rules” at MBA’s 2022 Regulatory Compliance Conference
- Benjamin W. Hutten to discuss “Fundamentals of financial crime compliance” at the Practicing Law Institute
- Benjamin W. Hutten to discuss “Ongoing CDD: Operational considerations” at NAFCU’s Regulatory Compliance & BSA Seminar
- James C. Chou to discuss ransomware at NAFCU’s Regulatory Compliance & BSA Seminar