
InfoBytes Blog

Financial Services Law Insights and Observations

Brainard weighs benefits and risks of using AI in financial services industry

Federal Issues | Federal Reserve | Artificial Intelligence | Fintech | Bank Regulatory


On January 12, Federal Reserve Governor Lael Brainard spoke at the AI Academic Symposium hosted by the Fed’s Board about the increased use of artificial intelligence (AI) in the financial services industry. Brainard reflected that since she first shared early observations on the use of AI in 2018 (covered by InfoBytes here), the Fed has been exploring ways to better understand how AI is used, as well as how banking regulators can best manage risk through supervision while supporting responsible AI adoption and equitable outcomes. “Regulators must provide appropriate expectations and adjust those expectations as the use of AI in financial services and our understanding of its potential and risks evolve,” Brainard noted, adding that the Fed is currently collaborating with the other federal banking agencies on a potential request for information on the risk management of AI applications in financial services.

Emphasizing the “wide-ranging” scope of AI applications, Brainard commented that financial services firms have been using AI for operational risk management, customer-facing applications, and fraud prevention and detection. She suggested that machine learning-based fraud detection tools have the potential to identify suspicious activity “with greater accuracy and speed,” while potentially enabling firms to respond in real time. Brainard also acknowledged AI’s potential to improve the accuracy and fairness of credit decisions and to expand overall credit availability.

However, Brainard also discussed AI challenges, including the “black box problem” that can arise with complex machine learning models that “operate at a level of complexity” difficult to fully understand. This lack of model transparency is a central challenge, she noted, stressing that financial services firms must understand the basis on which a machine learning model determines creditworthiness, as well as the potential for AI models to “reflect or amplify bias.” With respect to safety and soundness, Brainard stated that “bank management needs to be able to rely on models’ predictions and classifications to manage risk. They need to have confidence that a model used for crucial tasks such as anticipating liquidity needs or trading opportunities is robust and will not suddenly become erratic.” She reiterated that regulators must provide appropriate expectations and adjust them as the use of AI in financial services, and the understanding of its potential and risks, evolves.