InfoBytes Blog

Financial Services Law Insights and Observations

  • OCC discusses use of AI

    On May 13, OCC Deputy Comptroller for Operational Risk Policy Kevin Greenfield testified before the House Financial Services Committee Task Force on Artificial Intelligence (AI) on banks’ use of AI and innovation in technology services. Among other things, Greenfield addressed the OCC’s approach to innovation and supervisory expectations, as well as the agency’s ongoing efforts to update its technological framework to support its bank supervision mandate. According to Greenfield’s written testimony, the OCC “recognizes the paramount importance of protecting sensitive data and consumer privacy, particularly given the use of consumer data and expanded data sets in some AI applications.” He noted that many banks use AI technologies and are investing in AI research and applications to automate, augment, or replicate human analysis and decision-making tasks. Therefore, the agency “is continuing to update supervisory guidance, examination programs and examiner skills to respond to AI’s growing use.” Greenfield also pointed out that the agency follows a risk-based supervision model focused on safe, sound, and fair banking practices, as well as compliance with laws and regulations, including fair lending and other consumer protection requirements. This risk-based approach includes developing supervisory strategies based upon an individual bank’s risk profile and examiners’ review of new, modified, or expanded products and services. Greenfield further noted that “the OCC is focused on educating examiners on a wide range of AI uses and risks including risks associated with third parties, information security and resilience, compliance, BSA, credit underwriting, and fair lending and data governance, as part of training courses and other educational resources.” According to Greenfield’s oral statement, “banks need effective risk management and controls for model validation and explainability, data management, privacy, and security regardless of whether a bank develops AI tools internally or purchases through a third party.”

    Bank Regulatory Federal Issues OCC House Financial Services Committee Privacy/Cyber Risk & Data Security Artificial Intelligence Third-Party Risk Management Fintech

  • DOJ and EEOC address AI employment decision disability discrimination

    Federal Issues

    On May 12, the DOJ and the Equal Employment Opportunity Commission (EEOC) released a technical assistance document addressing disability discrimination when using artificial intelligence (AI) and other software tools to make employment decisions. According to the announcement, the DOJ’s guidance document, Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring, provides a broad overview of rights and responsibilities in plain language, and, among other things, (i) provides examples of technological tools used by employers; (ii) clarifies that employers must consider the impact on different disabilities when designing or choosing technological tools; (iii) describes employers’ obligations under the Americans with Disabilities Act (ADA) when using algorithmic decision-making tools; and (iv) provides information for employees on actions they may take if they believe they have experienced discrimination. The EEOC also released a technical assistance document, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees, which focuses on preventing discrimination against job seekers and employees with disabilities.

    Federal Issues DOJ EEOC Artificial Intelligence Americans with Disabilities Act Discrimination

  • FHFA releases AI/ML risk management guidance for GSEs

    Federal Issues

    On February 10, FHFA released Advisory Bulletin (AB) 2022-02 to Fannie Mae and Freddie Mac (GSEs) on managing risks related to the use of artificial intelligence and machine learning (AI/ML). While recognizing that the use of AI/ML has grown rapidly among financial institutions to support a wide range of functions, including customer engagement, risk analysis, credit decision-making, fraud detection, and information security, FHFA warned that AI/ML may also expose a financial institution to heightened compliance, financial, operational, and model risk. In releasing AB 2022-02 (the first publicly released guidance by a U.S. financial regulator that specifically focuses on AI/ML risk management), FHFA advised that the GSEs should adopt a risk-based, flexible approach to AI/ML risk management that is also able “to accommodate changes in the adoption, development, implementation, and use of AI/ML.” Diversity and inclusion (D&I) should also factor into the GSEs’ AI/ML processes, stated a letter released the same day from FHFA’s Office of Minority and Women Inclusion, which outlined its expectations for the GSEs “to embed D&I considerations throughout all uses of AI/ML” and “address explicit and implicit biases to ensure equity in AI/ML recommendations.” The letter also emphasized the distinction between D&I and fairness and equity, explaining that D&I “requires additional deliberation because it goes beyond the equity considerations of the impact of the use of AI/ML and requires an assessment of the tools, mechanisms, and applications that may be used in the development of the systems and processes that incorporate AI/ML.”

    Additionally, AB 2022-02 outlined four areas of heightened risk in the use of AI/ML: (i) model risk related to bias that may lead to discriminatory or unfair outcomes (includes “black box risk” where a “lack of interpretability, explainability, and transparency” may exist); (ii) data risk, including concerns related to the accuracy and quality of datasets, bias in data selection, security of data from manipulation, and unfamiliar data sources; (iii) operational risks related to information security and IT infrastructure, among other things; and (iv) regulatory and compliance risks concerning compliance with consumer protection, fair lending, and privacy laws. FHFA provided several key control considerations and encouraged the GSEs to strengthen their existing risk management frameworks where heightened risks are present due to the use of AI/ML.
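
    While AB 2022-02 does not prescribe any particular technique, one common way to probe the “black box risk” it describes is permutation importance: shuffling one input feature at a time and measuring how much the model’s accuracy degrades. Below is a minimal, hypothetical Python sketch of that idea; the function name, data layout, and parameters are illustrative assumptions, not anything drawn from the bulletin.

        # Hypothetical sketch: permutation importance as one basic
        # interpretability check for an opaque model. Illustrative only.
        import numpy as np

        def permutation_importance(predict, X, y, n_repeats=10, seed=0):
            """Accuracy drop when each feature column is shuffled; larger = more influential."""
            rng = np.random.default_rng(seed)
            baseline = np.mean(predict(X) == y)
            importances = np.zeros(X.shape[1])
            for j in range(X.shape[1]):
                scores = []
                for _ in range(n_repeats):
                    X_perm = X.copy()
                    rng.shuffle(X_perm[:, j])  # break the link between feature j and the outcome
                    scores.append(np.mean(predict(X_perm) == y))
                importances[j] = baseline - np.mean(scores)
            return importances

    Features whose removal barely moves accuracy contribute little to the model’s decisions; features with large drops are the ones an examiner would expect the institution to be able to explain.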

    Federal Issues FHFA Fintech Artificial Intelligence Mortgages GSEs Risk Management Fannie Mae Freddie Mac Diversity

  • FTC comments on CFPB’s big tech payments inquiry

    Federal Issues

    On December 21, FTC Chair Lina M. Khan submitted a comment in response to the CFPB’s Notice and Request for Comment concerning the CFPB’s October orders issued to six large U.S. technology companies seeking information and data on their payment system business practices. (Covered by InfoBytes here.) In her comment, Khan identified three areas of concern that she hopes will help inform the CFPB’s inquiry, including that big tech companies’ (i) “participation in payments and financial services could enable them to entrench and extend their market positions and privileged access to data and AI techniques in potentially anticompetitive and exploitative ways”; (ii) “use of algorithmic decision-making in financial services amplifies concerns of discrimination, bias, and opacity”; and (iii) “increasingly commingled roles as payment and authentication providers could concentrate risk and create single points of failure.” Khan noted that “[t]he potential risks created by Big Tech’s expansion into payments and financial services are notable and demand close scrutiny,” and stated that she will be monitoring “this inquiry and the findings it produces to help inform the FTC’s work.”

    Federal Issues FTC CFPB Payments Artificial Intelligence Discrimination

  • CFPB asks tech workers to report AI lending discrimination

    Federal Issues

    On December 15, the CFPB released a blog post calling on technology workers to report potential violations of federal consumer financial laws, including those related to artificial intelligence (AI), as part of the Bureau’s efforts to adapt to the evolving financial landscape. According to the Bureau, AI has become a part of nearly every consumer financial market, creating the potential for intentional and unintentional discrimination within the decision-making process. As an example, while algorithmic mortgage underwriting has the potential to reduce discrimination, the Bureau warned that “researchers found discriminatory effects of these new technologies, as Black and Hispanic families have been more likely to be denied a mortgage compared to similarly situated white families.” The Bureau asked tech workers, including engineers, data scientists, and others with detailed knowledge of these algorithms and technologies, to report potential discrimination or other misconduct to the Bureau to help ensure these technologies are not being misused or abused. “Tech workers may have entered the field to change the world for the better, but then discover their work being misused or abused for unlawful ends,” CFPB Chief Technologist Erie Meyer stated. The Bureau updated its whistleblower webpage to provide additional information on the whistleblower submission process, and noted that fair lending experts and technologists will review submitted whistleblower tips. The webpage also describes the type of information the Bureau is seeking, and outlines whistleblower protections.
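
    The denial-rate disparities the Bureau describes are often screened with a simple adverse impact ratio (AIR): each group’s approval rate divided by a reference group’s, with results below roughly 0.8 (the “four-fifths” rule of thumb) flagged for closer fair lending review. A minimal, hypothetical Python sketch follows; the group labels, data, and threshold are illustrative assumptions, not CFPB methodology.

        # Hypothetical sketch of an adverse-impact-ratio (AIR) screen
        # on lending decisions. Illustrative only.
        from collections import defaultdict

        def adverse_impact_ratios(decisions, reference_group):
            """decisions: iterable of (group, approved) pairs; returns AIR per group."""
            approved, total = defaultdict(int), defaultdict(int)
            for group, ok in decisions:
                total[group] += 1
                approved[group] += int(ok)
            ref_rate = approved[reference_group] / total[reference_group]
            return {g: (approved[g] / total[g]) / ref_rate for g in total}

        decisions = [("A", True), ("A", True), ("A", False),
                     ("B", True), ("B", False), ("B", False)]
        for group, air in adverse_impact_ratios(decisions, "A").items():
            if air < 0.8:  # common rule-of-thumb flag for further review
                print(f"group {group}: AIR {air:.2f} -> flag for review")

    A low AIR is a screening signal, not proof of discrimination; a flagged disparity would typically trigger a review of whether the applicants were in fact similarly situated.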

    Federal Issues CFPB Artificial Intelligence Fintech Whistleblower Fair Lending Consumer Finance

  • Senate launches Financial Innovation Caucus

    Federal Issues

    On May 25, Senators Cynthia Lummis (R-WY) and Kyrsten Sinema (D-AZ), along with several other bipartisan Senators, announced the creation of the U.S. Senate Financial Innovation Caucus to highlight “responsible innovation in the United States financial system, and how financial technologies can improve markets to be more inclusive, safe and prosperous for all Americans.” The Senate will use the caucus “to discuss domestic and global financial technology issues, and to launch legislation to empower innovators, protect consumers and guide regulators, while driving U.S. financial leadership on the international stage.” The press release notes that the caucus is timely because of the “growing regulatory focus on digital assets,” including efforts by the Federal Reserve Board, the SEC, and foreign governments to create digital currencies. The caucus will focus on critical issues pertaining to the future of banking and U.S. competitiveness on the global stage, including: (i) distributed ledger technology (blockchain); (ii) artificial intelligence and machine learning; (iii) data management; (iv) consumer protection; (v) anti-money laundering; (vi) faster payments; (vii) central bank digital currencies; and (viii) financial inclusion and opportunity for all.

    Federal Issues Fintech U.S. Senate Digital Assets Artificial Intelligence Finance Federal Reserve SEC Bank Regulatory Central Bank Digital Currency

  • FDIC chairman addresses the importance of innovation

    Fintech

    On May 11, FDIC Chairman Jelena McWilliams spoke at the Federalist Society Conference about the Dodd-Frank Act in a post-Covid-19 environment and the future of financial regulation. Among other topics, McWilliams emphasized the importance of promoting innovation through inclusion, resilience, amplification, and protecting the future of the banking sector. McWilliams pointed out that “alternative data and AI can be especially important for small businesses, such as sole proprietorships and smaller companies owned by women and minorities, which often do not have a long credit history” and that “these novel measures of creditworthiness, like income streams, can provide critical access to capital” that otherwise may not be possible to access. McWilliams also discussed an interagency request for information announced by the FDIC and other regulators in March (covered by InfoBytes here), which seeks input on financial institutions’ use of AI and asks whether additional regulatory clarity may be helpful. She added that rapid prototyping helps banks initiate effective reporting of more granular data. Additionally, McWilliams addressed the agency’s efforts to expand fintech partnerships through several initiatives intended to facilitate cooperation between fintech groups and banks to promote accessibility to new customers and offer new products. Concerning the direct cost of developing and deploying technology at any one institution, McWilliams added that “there are things that we can do to foster innovation across all banks and to reduce the regulatory cost of innovation.”

    Fintech FDIC Covid-19 Dodd-Frank Artificial Intelligence Bank Regulatory

  • FDIC announces FDItech virtual ‘Office Hours’

    Fintech

    On April 29, the FDIC’s technology lab, FDiTech, announced that it will host a series of virtual “office hours” to hear from a variety of stakeholders in the business of banking concerning current and evolving technological innovations. The office hours will be hour-long, one-on-one sessions that will provide insight into the contributions that innovation has made in reshaping banks and enabling regulators to manage their oversight efficiently. According to the FDIC, “FDiTech seeks to evaluate and promote the adoption of innovative and transformative technologies in the financial services sector and to improve the efficiency, effectiveness, and stability of U.S. banking operations, services, and products; to support access to financial institutions, products, and services; and to better serve consumers.” FDiTech’s goal is to contribute to the transformation of banking by supporting “the adoption of technological innovations through increased collaboration with market participants.” In the first series of office hour sessions, the FDIC and FDiTech are seeking participants’ outlook on artificial intelligence and machine learning related to: (i) automation of back office processes; (ii) Bank Secrecy Act/Anti-Money Laundering compliance; (iii) credit underwriting decisions; and (iv) cybersecurity.

    FDiTech anticipates hosting approximately 15 one-hour sessions each quarter. Interested parties seeking to participate in these sessions must contact the FDIC by May 24.

    Fintech FDiTech Artificial Intelligence Bank Secrecy Act FDIC Bank Regulatory

  • House Financial Services Committee reauthorizes fintech, AI task forces

    Federal Issues

    On April 30, the House Financial Services Committee announced the reauthorization of the Task Forces on Financial Technology and Artificial Intelligence. According to Chairwoman Maxine Waters (D-CA), the “Task Forces will investigate whether these technologies are serving the needs of consumers, investors, small businesses, and the American public, which is needed especially as we recover from the COVID-19 pandemic.” Representative Stephen Lynch (D-MA) will chair the Task Force on Financial Technology, which will continue to monitor the opportunities and challenges posed by fintech applications for lending, payments, and money management, and offer insight on how Congress can ensure that Americans’ data and privacy are protected. Representative Bill Foster (D-IL) will chair the Task Force on Artificial Intelligence, which will examine how AI affects the way Americans operate in the marketplace, secure their identities, and interact with financial institutions. The task forces will also examine issues related to algorithms, digital identities, and combatting fraud. As previously covered by InfoBytes, these task forces were set to expire in December 2019.

    House GOP members also released a report highlighting the efforts of the Task Forces on Financial Technology and Artificial Intelligence and including recommendations on how to leverage innovation. According to the report, the two “key takeaways” are that “Congress must (1) promote greater financial inclusion and expanded access to financial services, and (2) ensure that the federal government does not hinder the United States’ role as a global leader in financial services innovation.” The report also recommends that regulators and Congress: (i) decide how to assist innovation, especially in the private sector; (ii) use the power of data and machine learning to fight fraud, streamline compliance, and make better underwriting decisions; and (iii) “keep up with technology to better protect consumers.”

    Federal Issues House Financial Services Committee Fintech Artificial Intelligence

  • FTC provides AI guidance

    Federal Issues

    On April 19, the FTC’s Bureau of Consumer Protection published a blog post identifying lessons for managing the consumer protection risks of artificial intelligence (AI) technology and algorithms. According to the FTC, over the years the Commission has addressed the challenges presented by the use of AI and algorithms to make decisions about consumers, and has taken many enforcement actions against companies for allegedly violating laws such as the FTC Act, FCRA, and ECOA when using AI and machine learning technology. The FTC stated that it has used its expertise with these laws to: (i) report on big data analytics and machine learning; (ii) conduct a hearing on algorithms, AI, and predictive analytics; and (iii) issue business guidance on AI and algorithms. To assist companies navigating AI, the FTC has provided the following guidance:

    • Start with the right foundation. From the beginning, companies should consider ways to enhance data sets, design models to account for data gaps, and confine where or how models are used. The FTC advised that if a “data set is missing information from particular populations, using that data to build an AI model may yield results that are unfair or inequitable to legally protected groups.” (A minimal sketch of this kind of data-gap check appears after this list.)
    • Watch out for discriminatory outcomes. It is vital for companies to test algorithms—both prior to use and periodically after that—to prevent discrimination based on race, gender, or other protected classes.
    • Embrace transparency and independence. Companies should consider how to embrace transparency and independence, such as “by using transparency frameworks and independent standards, by conducting and publishing the results of independent audits, and by opening . . . data or source code to outside inspection.”
    • Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results. Under the FTC Act, company “statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence.”
    • Data transparency. In guidance on AI issued last year, as previously covered by InfoBytes, the FTC warned companies to be careful about how they obtain the data that powers their models.
    • Do more good than harm. Companies are warned that if their models cause “more harm than good—that is, in Section 5 parlance, if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition—the FTC can challenge the use of that model as unfair.”
    • Importance of accountability. The FTC stresses the importance of transparency and independence, and cautions companies to hold themselves accountable, or the FTC may do it for them.
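
    As referenced in the first item above, a data-gap check can be as simple as comparing each group’s share of the training data with its share of a reference population and flagging large shortfalls. The following hypothetical Python sketch illustrates the idea; the groups, counts, shares, and threshold are invented for illustration, not taken from the FTC’s guidance.

        # Hypothetical sketch of a training-data representation check.
        # Illustrative only.
        def representation_gaps(train_counts, population_shares, min_ratio=0.5):
            """Flag groups whose training-data share falls far below their population share."""
            n = sum(train_counts.values())
            gaps = {}
            for group, pop_share in population_shares.items():
                train_share = train_counts.get(group, 0) / n
                if train_share < min_ratio * pop_share:
                    gaps[group] = (train_share, pop_share)
            return gaps

        train_counts = {"A": 800, "B": 150, "C": 50}            # records per group
        population_shares = {"A": 0.60, "B": 0.25, "C": 0.15}   # reference shares
        for g, (ts, ps) in representation_gaps(train_counts, population_shares).items():
            print(f"group {g}: {ts:.0%} of training data vs {ps:.0%} of population")

    A group flagged this way is underrepresented relative to the chosen reference, which is the kind of missing-population gap the FTC warns can yield unfair or inequitable model results.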

    Federal Issues Big Data FTC Artificial Intelligence FTC Act FCRA ECOA Consumer Protection Fintech
