On February 12, the House Financial Services Committee’s Task Force on Artificial Intelligence (AI) held a hearing entitled “Equitable Algorithms: Examining Ways to Reduce AI Bias in Financial Services.” As previously covered by InfoBytes, the Committee created the task force to determine how AI may be used in the financial services industry and to examine issues surrounding algorithms, digital identities, and combating fraud. According to the Committee’s memorandum regarding the hearing, AI’s key technology is machine learning (ML), “a process that may rely on pre-set rules to solve problems (also known as algorithms) without,” or with only limited, human involvement. Witnesses, largely drawn from the fields of computer science and AI, discussed how human biases can be perpetuated in algorithms that use historical data as input and how best to ensure fairness and accuracy. The witnesses agreed that fairness has many different definitions that must be considered when creating algorithms, and testified that striving for fairness for one protected class may necessarily entail tradeoffs resulting in less fairness to another protected class. Among other things, committee members questioned whether it is possible to formulate an algorithm that guarantees fairness. The witnesses urged members not to focus solely on algorithms but also to consider the data itself, including where it came from and its quality and appropriateness, because flawed data will likely produce flawed outputs.
On October 24, the Commodity Futures Trading Commission (CFTC) announced that LabCFTC will operate as an independent operating office of the agency, reporting directly to the chair of the CFTC. As previously covered by InfoBytes, LabCFTC was established in 2017 as an initiative to engage innovators in the financial technology industry and promote responsible fintech innovation. According to the CFTC, the change reflects the importance the agency places on examining the value of innovation within the financial marketplace and making the agency accessible to fintech innovators. The CFTC also released the Artificial Intelligence in Financial Markets primer to provide an “overview of how AI is applied in financial markets” as well as resources for market participants, consumers, and the public. The primer is part of a LabCFTC series on fintech innovation. (Previous InfoBytes coverage here.)
On August 2, FDIC Chairman Jelena McWilliams spoke before the Financial Conduct Authority’s 2019 Global AML and Financial Crime TechSprint in Washington, D.C. on the importance of promoting innovation within the banking industry and ramping up efforts to help banks embrace new technologies. McWilliams noted that she is “impatient for transformation,” especially in areas that would assist banks—particularly community banks—in eliminating regulatory uncertainty, adopting new technologies, managing risks, or partnering with fintech startups to improve regulatory compliance in areas such as Bank Secrecy Act/anti-money laundering rules. McWilliams discussed the FDIC’s new office of innovation (FDiTech), which was created to support these goals. In particular, McWilliams indicated that the FDIC would support collaboration with developers, institutions, and regulators to pilot new products and services, with the goal of publishing the results of these pilots to facilitate understanding of what worked, what did not, and methods of improvement going forward. According to McWilliams, “[b]y promoting these developments and encouraging our FDIC-supervised institutions to voluntarily adopt a more advanced technological footing, we can help foster the transformation of the community banking sector. In turn, the institutions we supervise can reach greater efficiency with products and services that are more attractive to consumers.”
Agencies encourage financial institutions to explore innovative industry approaches to BSA/AML compliance
On December 3, the Financial Crimes Enforcement Network (FinCEN) released a joint statement along with the federal banking agencies (the Federal Reserve Board, FDIC, NCUA, and OCC, together, the “agencies”) encouraging banks and credit unions to explore innovative approaches, such as artificial intelligence, digital identity technologies, and internal financial intelligence units, to combat money laundering, terrorist financing, and other illicit financial threats and to safeguard the financial system. According to the agencies, private sector innovation and the adoption of new technologies can enhance the effectiveness and efficiency of Bank Secrecy Act/anti-money laundering (BSA/AML) compliance programs, including transaction monitoring systems. Specifically, the agencies urged banks to test pilot programs exploring the use of artificial intelligence. The agencies emphasized, however, that while they may provide feedback on innovative programs, the “pilot programs in and of themselves should not subject banks to supervisory criticism even if the pilot programs ultimately prove unsuccessful. Likewise, pilot programs that expose gaps in a BSA/AML compliance program will not necessarily result in supervisory action with respect to that program.” The joint statement further specifies that the agencies will be willing to grant exceptive relief from BSA regulatory requirements to facilitate pilot programs, “provided that banks maintain the overall effectiveness of their BSA/AML compliance programs.” Conversely, banks that maintain effective compliance programs but choose not to innovate will not be penalized or criticized.
According to Treasury Under Secretary for Terrorism and Financial Intelligence Sigal Mandelker, “[a]s money launderers and other illicit actors constantly evolve their tactics, we want the compliance community to likewise adapt their efforts to counter these threats,” pointing to the recent use of innovative technologies to identify and report illicit financial activity related to both Iran and North Korea.
As previously covered by InfoBytes, earlier in October the agencies provided guidance on resource sharing between banks and credit unions in order to more efficiently and effectively manage their BSA/AML obligations.
On November 13, Federal Reserve Governor Lael Brainard spoke at the “Fintech and New Financial Landscape” conference hosted by the Federal Reserve Bank of Philadelphia to discuss the potential implications of artificial intelligence (AI) innovation and to advise regulators to remain diligent in understanding and regulating financial institutions’ use of AI to augment or replace traditional financial processes. Brainard’s prepared remarks emphasize both the benefits and the potential risks to bank safety and consumer protection posed by new AI applications. Noting that many AI tools are proprietary and may be shielded from close scrutiny, Brainard suggested that existing regulations and supervisory guidance, such as the Federal Reserve Board’s guidance on model risk management and vendor risk management, could prove helpful in an area that requires strong controls. Among other things, Brainard discussed the use of AI models to make credit decisions and noted the risk of “opacity and explainability” challenges, which can make it difficult to explain how consumer credit decisions were reached and could “make it harder for consumers to improve their credit score by changing their behavior.” Brainard noted, however, that the “AI community is responding with important advances in developing ‘explainable’ AI tools with a focus on expanding consumer access to credit.”
Brainard also commented that “[r]egulation and supervision need to be thoughtfully designed so that they ensure risks are appropriately mitigated but do not stand in the way of responsible innovations that might expand access and convenience for consumers and small businesses or bring greater efficiency, risk detection, and accuracy.” Moreover, supervisory guidance to firms must be read in the context of the “relative risk,” and the “level of scrutiny should be commensurate with the potential risk posed by the approach, tool, model, or process used.”
At the same conference, FDIC Chairman Jelena McWilliams also discussed how innovation can expand banking access to more consumers, for example by lowering transaction costs and increasing credit availability, but emphasized that millions of “unbanked or underbanked” U.S. households do not experience these technological benefits. McWilliams stated that “[i]t will be up to institutions to leverage technology and develop products to reach these consumers.” McWilliams also discussed the FDIC’s planned Office of Innovation, which will, among other things, evaluate ways to support community banks with limited resources for fintech research and development and explore policy changes to encourage more innovation, particularly “in the areas of identity management, data quality and integrity, and data usage or analysis.” Additionally, McWilliams stated that advances in technology and data analytics will present opportunities for managing risk and will change how regulators handle oversight in areas such as Bank Secrecy Act/anti-money laundering compliance and consumer privacy.