
InfoBytes Blog

Financial Services Law Insights and Observations

FHFA releases AI/ML risk management guidance for GSEs

Federal Issues | FHFA | Fintech | Artificial Intelligence | Mortgages | GSEs | Risk Management | Fannie Mae | Freddie Mac | Diversity

On February 10, FHFA released Advisory Bulletin (AB) 2022-02 to Fannie Mae and Freddie Mac (the GSEs) on managing risks related to the use of artificial intelligence and machine learning (AI/ML). FHFA recognized that the use of AI/ML has grown rapidly among financial institutions to support a wide range of functions, including customer engagement, risk analysis, credit decision-making, fraud detection, and information security, but warned that AI/ML may also expose a financial institution to heightened compliance, financial, operational, and model risk. In releasing AB 2022-02 (the first publicly released guidance by a U.S. financial regulator that specifically focuses on AI/ML risk management), FHFA advised that the GSEs should adopt a risk-based, flexible approach to AI/ML risk management, one able “to accommodate changes in the adoption, development, implementation, and use of AI/ML.”

Diversity and inclusion (D&I) should also factor into the GSEs’ AI/ML processes, according to a letter released the same day by FHFA’s Office of Minority and Women Inclusion. The letter outlined the office’s expectations for the GSEs “to embed D&I considerations throughout all uses of AI/ML” and “address explicit and implicit biases to ensure equity in AI/ML recommendations.” The letter also emphasized the distinction between D&I and fairness and equity, explaining that D&I “requires additional deliberation because it goes beyond the equity considerations of the impact of the use of AI/ML and requires an assessment of the tools, mechanisms, and applications that may be used in the development of the systems and processes that incorporate AI/ML.”

Additionally, AB 2022-02 outlined four areas of heightened risk in the use of AI/ML: (i) model risk related to bias that may lead to discriminatory or unfair outcomes, including “black box risk” arising from a “lack of interpretability, explainability, and transparency”; (ii) data risk, including concerns related to the accuracy and quality of datasets, bias in data selection, security of data from manipulation, and unfamiliar data sources; (iii) operational risk related to information security and IT infrastructure, among other things; and (iv) regulatory and compliance risk concerning compliance with consumer protection, fair lending, and privacy laws. FHFA provided several key control considerations and encouraged the GSEs to strengthen their existing risk management frameworks where the use of AI/ML presents heightened risks.