FTC provides 2022 ECOA summary to CFPB
On February 9, the FTC announced it recently provided the CFPB with its annual summary of activities related to ECOA enforcement, focusing specifically on the Commission’s activities with respect to Regulation B. The summary discussed, among other things, the following FTC enforcement, research, and policy development initiatives:
- Last June, the FTC released a report to Congress discussing the use of artificial intelligence (AI), and warning policymakers to use caution when relying on AI to combat the spread of harmful online conduct. The report also raised concerns that AI tools can be biased, discriminatory, or inaccurate, could rely on invasive forms of surveillance, and may harm marginalized communities. (Covered by InfoBytes here.)
- The FTC continued to participate in the Interagency Task Force on Fair Lending, along with the CFPB, DOJ, HUD, and federal banking regulatory agencies. The Commission also continued its participation in the Interagency Fair Lending Methodologies Working Group to “coordinate and share information on analytical methodologies used in enforcement of and supervision for compliance with fair lending laws, including the ECOA.”
- The FTC initiated an enforcement action last April against an Illinois-based multistate auto dealer group for allegedly adding junk fees for unwanted “add-on” products to consumers’ bills and discriminating against Black consumers. In October, the FTC initiated a second action against a different auto dealer group and two of its officers for allegedly engaging in deceptive advertising and pricing practices and discriminatory and unfair financing. (Covered by InfoBytes here and here.)
- The FTC engaged in consumer and business education on fair lending issues, and reiterated that credit discrimination is illegal under federal law for banks, credit unions, mortgage companies, retailers, and companies that extend credit. The FTC also issued consumer alerts discussing enforcement actions involving racial discrimination and disparate impact, as well as agency initiatives centered around racial equity and economic equality.
Barr says AI should not create racial disparities in lending
On February 7, Federal Reserve Board Vice Chair for Supervision, Michael S. Barr, delivered remarks during the “Banking on Financial Inclusion” conference, where he warned financial institutions to make sure that using artificial intelligence (AI) and algorithms does not create racial disparities in lending decisions. Banks “should review the underlying models, such as their credit scoring and underwriting systems, as well as their marketing and loan servicing activities, just as they should for more traditional models,” Barr said, pointing to findings that show “significant and troubling disparities in lending outcomes for Black individuals and businesses relative to others.” He commented that “[w]hile research suggests that progress has been made in addressing racial discrimination in mortgage lending, regulators continue to find evidence of redlining and pricing discrimination in mortgage lending at individual institutions.” Studies have also found persistent discrimination in other markets, including auto lending and lending to Black-owned businesses. Barr further commented that despite significant progress over the past 25 years in expanding access to banking services, a recent FDIC survey found that the unbanked rate for Black households was 11.3 percent as compared to 2.1 percent for White households.
Barr suggested several measures for addressing these issues and eradicating discrimination. Banks should actively analyze data to identify where racial disparities occur, conduct on-the-ground testing to identify discriminatory practices, and review AI or other algorithms used in making lending decisions, Barr advised. Banks should also devote resources to stamp out unfair, abusive, or illegal practices, and find opportunities to support and invest in low- and moderate-income (LMI) communities, small businesses, and community infrastructure. Meanwhile, regulators have a clear responsibility to use their supervisory and enforcement tools to make sure banks resolve consumer protection weaknesses, Barr said, adding that regulators should also ensure that rules provide appropriate incentives for banks to invest in LMI communities and lend to such households.
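Barr's suggestion that banks actively analyze their data for racial disparities can be illustrated with a toy calculation. The sketch below is purely illustrative: the data, group labels, and function names are invented here, and real fair lending analysis relies on regression with legitimate underwriting controls rather than raw rate comparisons.

```python
# Illustrative sketch only: comparing approval rates across demographic groups.
# Real fair lending analysis controls for credit factors; this does not.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved_bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparity_ratios(rates, reference):
    """Ratio of each group's approval rate to a reference group's rate."""
    base = rates[reference]
    return {g: r / base for g, r in rates.items()}

# Hypothetical lending decisions.
data = [("A", True), ("A", True), ("A", False), ("A", True),
        ("B", True), ("B", False), ("B", False), ("B", True)]
rates = approval_rates(data)           # {"A": 0.75, "B": 0.5}
ratios = disparity_ratios(rates, "A")  # group B approved at ~67% of A's rate
```

A ratio well below 1.0 for a protected group would be the kind of red flag that prompts the deeper review of underwriting models Barr describes.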
NIST releases new AI framework to help organizations mitigate risk
On January 26, the National Institute of Standards and Technology (NIST) released voluntary guidance to help organizations that design, deploy, or use artificial intelligence (AI) systems mitigate risk. The Artificial Intelligence Risk Management Framework (developed in close collaboration with the private and public sectors pursuant to a Congressional directive under the National Defense Authorization Act for Fiscal Year 2021) “provides a flexible, structured and measurable process that will enable organizations to address AI risks,” NIST explained. The framework breaks down the process into four high-level functions: govern, map, measure, and manage. These functions, among other things, (i) provide guidance on how to evaluate AI for legal and regulatory compliance and ensure policies, processes, procedures, and practices are transparent, robust, and effective; (ii) outline processes for addressing AI risks and benefits arising from third-party software and data; (iii) describe the mapping process for collecting information to establish the context to frame AI-related risks; (iv) provide guidance for employing and measuring “quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts”; and (v) set forth a proposed process for managing and allocating risk management resources. Examples are also provided within the framework to help organizations implement the guidance.
“This voluntary framework will help develop and deploy AI technologies in ways that enable the United States, other nations and organizations to enhance AI trustworthiness while managing risks based on our democratic values,” Deputy Commerce Secretary Don Graves said in the announcement. “It should accelerate AI innovation and growth while advancing—rather than restricting or damaging—civil rights, civil liberties and equity for all.”
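For organizations mapping the framework onto internal processes, one minimal way to track the four functions is as a simple checklist structure. In the sketch below, only the function names (govern, map, measure, manage) and the paraphrased activities come from the framework as summarized above; the data structure and tracking logic are this sketch's invention, not NIST's.

```python
# Illustrative only: the AI RMF's four core functions as a checklist.
# Activities are paraphrased from the framework summary; the tracking
# mechanism is hypothetical.
AI_RMF = {
    "govern":  ["evaluate legal and regulatory compliance",
                "keep policies, processes, and practices transparent",
                "address third-party software and data risks"],
    "map":     ["collect context to frame AI-related risks"],
    "measure": ["apply quantitative/qualitative tools to assess and monitor risk"],
    "manage":  ["allocate risk management resources"],
}

def pending(completed):
    """Return activities not yet marked complete, grouped by function."""
    return {fn: [a for a in acts if a not in completed]
            for fn, acts in AI_RMF.items()}

done = {"collect context to frame AI-related risks"}
# After completing the mapping activity, only "map" is fully satisfied.
```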
DOJ, HUD say Fair Housing Act extends to algorithm-based tenant screening
On January 9, the DOJ and HUD announced they filed a joint statement of interest in a pending action alleging discrimination under the Fair Housing Act (FHA) against Black and Hispanic rental applicants based on the use of an algorithm-based tenant screening system. The lawsuit, filed in the U.S. District Court for the District of Massachusetts, alleged that Black and Hispanic rental applicants who use housing vouchers to pay part of their rent were denied rental housing due to their “SafeRent Score,” which is derived from the defendants’ algorithm-based screening software. The plaintiffs claimed that the algorithm relies on factors that disproportionately disadvantage Black and Hispanic applicants, such as credit history and non-tenancy related debts, and fails to consider that the use of HUD-funded housing vouchers makes such tenants more likely to pay their rent. Through the statement of interest, the agencies seek to clarify two questions of law they claim the defendants erroneously represented in their motions to dismiss: (i) the appropriate standard for pleading disparate impact claims under the FHA; and (ii) the types of companies subject to the FHA.
The agencies first argued that the defendants did not apply the proper pleading standard for a claim of disparate impact under the FHA. To establish an FHA disparate impact claim, the agencies explained, “plaintiffs must show ‘the occurrence of certain outwardly neutral practices’ and ‘a significantly adverse or disproportionate impact on persons of a particular type produced by the defendant’s facially neutral acts or practices.’” The agencies disagreed with the defendants’ assertion that the plaintiffs “must also allege specific facts establishing that the policy is ‘artificial, arbitrary, and unnecessary.’” This contention, the agencies said, “conflates the burden-shifting framework for proving disparate impact claims with the pleading burden.” The agencies also rejected arguments that the plaintiffs must challenge the entire “formula” of the scoring system, rather than just one element, in order to allege a statistical disparity, and must provide “statistical findings specific to the disparate impact of the scoring system.” According to the agencies, the plaintiffs adequately identified an “essential nexus” between the algorithm’s scoring system and the disproportionate effect on certain rental applicants based on race.
The agencies also explained that residential screening companies, including the defendants, fall under the FHA’s purview. While the defendants argued that the FHA does not apply to companies “that are not landlords and do not make housing decisions, but only offer services to assist those that do make housing decisions,” the agencies contended that this misconstrues the clear statutory language of the FHA and presented case law affirming that FHA liability reaches “a broad array of entities providing housing-related services.”
“Housing providers and tenant screening companies that use algorithms and data to screen tenants are not absolved from liability when their practices disproportionately deny people of color access to fair housing opportunities,” Assistant Attorney General Kristen Clarke of the DOJ’s Civil Rights Division stressed. “This filing demonstrates the Justice Department’s commitment to ensuring that the Fair Housing Act is appropriately applied in cases involving algorithms and tenant screening software.”
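The statistical disparity at the heart of a disparate impact claim is often demonstrated with a significance test on outcome rates. The sketch below is a generic two-proportion z-test on hypothetical denial counts; it is not drawn from this case, and actual litigation analysis is far more rigorous than this illustration.

```python
# Illustrative only: a two-proportion z-test of the sort used to show a
# "significantly adverse or disproportionate impact" in denial rates.
# All counts are hypothetical.
import math

def two_proportion_z(denied_a, total_a, denied_b, total_b):
    """z-statistic for the difference between two groups' denial rates."""
    p_a, p_b = denied_a / total_a, denied_b / total_b
    pooled = (denied_a + denied_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical: 60 of 200 protected-class applicants denied vs. 30 of 200 others.
z = two_proportion_z(60, 200, 30, 200)  # |z| > 2 suggests the disparity
                                        # is unlikely to be chance
```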
FTC’s annual PrivacyCon focuses on consumer privacy and security issues
On November 1, the FTC held its annual PrivacyCon event, which hosted research presentations on a wide range of consumer privacy and security issues. Opening the event, FTC Chair Lina Khan stressed the importance of hearing from the academic community on topics related to a range of privacy issues that the FTC and other government bodies may miss. Khan emphasized that regulators cannot wait until new technologies fully emerge to think of ways to implement new laws for safeguarding consumers. “The FTC needs to be on top of this emerging industry now, before problematic business models have time to solidify,” Khan said, adding that the FTC is consistently working on privacy matters and is “prioritizing the use of creative ideas from academia in [its] bread-and-butter work” to craft better remedies to reflect what is actually happening. She highlighted a recent enforcement action taken against an online alcohol marketplace and its CEO for failing to take reasonable steps to prevent two major data breaches (covered by InfoBytes here). Khan noted that while the settlement’s requirements, such as imposing multi-factor authentication requirements and destroying unneeded user data, may not sound “very cutting-edge,” they serve as a big step forward for government enforcers.
Chief Technology Officer Stephanie Nguyen, who is responsible for leading the charge to integrate technologists across the FTC’s various lines of work, including consumer privacy, discussed the work of these technologists (including AI and security experts, software engineers, designers, and data scientists) to help develop remedies in data security-related enforcement actions and to push companies not just to do the minimum to remediate areas like unreasonable data security but to model best practices for the industry. “We want to see bad actors face real consequences,” Nguyen said, adding that the FTC wants to hold corporate leadership accountable as it did in the enforcement action Khan cited. Nguyen further stressed that there is also a need to address systemic risk by making companies delete illegally collected data and destroy any algorithms derived from the data.
The one-day conference featured several panel sessions covering a number of topics related to consumer surveillance, automated decision-making systems, children’s privacy, devices that listen to users, augmented/virtual reality, interfaces and dark patterns, and advertising technology. Topics addressed during the panels included (i) requiring data brokers to provide accurate information; (ii) understanding how data inaccuracies can disproportionately affect minorities and those living in poverty, and why relying on this data can lead to discriminatory practices; (iii) examining bias and discrimination risks when engaging in emotional artificial intelligence; (iv) understanding automated decision-making systems and how the quality of these systems impacts the populations they are meant to represent; (v) recognizing the lack of transparency related to children’s data collection and use, and the impact various privacy laws, including the Children’s Online Privacy Protection Rule, the General Data Protection Regulation, and the California Consumer Privacy Act, have on the collection/use/sharing of personal data; (vi) recognizing challenges related to cookie-consent interfaces and dark patterns; and (vii) examining how targeted online advertising both in the U.S. and abroad affects consumers.
White House proposes AI “Bill of Rights”
Recently, the Biden administration’s Office of Science and Technology Policy released a Blueprint for an AI Bill of Rights. The blueprint’s proposed framework identifies five principles for guiding the design, use, and deployment of automated systems to protect the public as the use of artificial intelligence grows. The principles center around topics related to stronger safety measures, such as (i) ensuring systems are safe and effective; (ii) implementing proactive protections against algorithmic discrimination; (iii) incorporating built-in privacy protections, including providing the public control over how data is used and ensuring that the data collection meets reasonable expectations and is necessary for the specific context in which it is being collected; (iv) providing notice and explanation as to how an automated system is being used, as well as the resulting outcomes; and (v) ensuring the public is able to opt out from automated systems in favor of a human alternative and has access to a person who can quickly help remedy problems. According to the announcement, the proposed framework’s principles should be incorporated into policies governing systems with “the potential to meaningfully impact” an individual or community’s rights or access to resources and services related to education, housing, credit, employment, health care, government benefits, and financial services, among others.
OCC reports on cybersecurity and financial system resilience
Recently, the OCC released its annual report on cybersecurity and financial system resilience, which describes its cybersecurity policies and procedures, including those adopted in accordance with the Federal Information Security Modernization Act. According to the report, cybersecurity and operational resilience are “top issues for the federal banking system.” The OCC also noted that it has implemented regulations and standards requiring banks to implement information security programs and protect confidential information. For example, the Interagency Guidelines Establishing Standards for Safety and Soundness “require insured banks to have internal controls and information systems appropriate for the size of the institution and for the nature, scope, and risk of its activities and that provide for, among other requirements, effective risk assessment and adequate procedures to safeguard and manage assets.” OCC regulations also, among other things, require banks to file Suspicious Activity Reports upon detecting a known or suspected violation of federal law, a suspicious transaction related to illegal activity, or a violation of the Bank Secrecy Act. With respect to examinations, the OCC noted that it uses a risk-based supervision process to evaluate banks’ risk management, identify material and emerging concerns, and require banks to take corrective action when warranted. The report also discussed current and emerging cybersecurity and resilience threats to the banking sector, which include ransomware, account takeover, supply chain risks, and geopolitical threats. Additionally, the OCC noted that it “monitor[s] longer-term technology developments, which may affect cybersecurity and resilience in the future.” The use of artificial intelligence, including machine learning, is one such development that may impact cybersecurity, according to the OCC.
FTC issues report to Congress on use of AI
On June 16, the FTC issued a report to Congress regarding the use of artificial intelligence (AI), warning that policymakers should use caution when relying on AI to combat the spread of harmful online conduct. In the 2021 Appropriations Act, Congress directed the FTC to study and report on whether and how AI “may be used to identify, remove, or take any other appropriate action necessary to address” a wide variety of specified “online harms,” referring specifically to content that is deceptive, fraudulent, manipulated, or illegal. The report suggests that adoption of AI could be problematic, as AI tools can be biased, discriminatory, or inaccurate, and could rely on invasive forms of surveillance. To avoid introducing these additional harms, the report suggests lawmakers instead focus on developing legal frameworks to ensure no additional harm is caused by AI tools used by major technology platforms and others. The report further suggests that Congress, regulators, platforms, scientists, and others focus their attention on creating frameworks to address the following related considerations, among others: (i) the need for human intervention in connection with monitoring the use and decisions of AI tools intended to address harmful content; (ii) the need for meaningful transparency, “which includes the need for it to be explainable and contestable, especially when people’s rights are involved or when personal data is being collected or used”; and (iii) the need for accountability with respect to the data practices and results of the use of AI tools by platforms and other companies. Other recommendations include the use of authentication tools, the responsible use of inputs and outputs by data scientists, and interventions such as tools that slow the viral spread, or otherwise limit the impact, of certain harmful content.
The Commission voted 4-1 at an open meeting to send the report to Congress. Commissioner Noah Joshua Phillips issued a dissenting statement, finding that the report provides “short shrift to how and why AI is being used to combat the online harms identified by Congress,” and instead “reads as a general indictment of the technology itself.”
OCC discusses use of AI
On May 13, OCC Deputy Comptroller for Operational Risk Policy Kevin Greenfield testified before the House Financial Services Committee Task Force on Artificial Intelligence (AI) discussing banks' use of AI and innovation in technology services. Among other things, Greenfield addressed the OCC’s approach to innovation and supervisory expectations, as well as the agency’s ongoing efforts to update its technological framework to support its bank supervision mandate. According to Greenfield’s written testimony, the OCC “recognizes the paramount importance of protecting sensitive data and consumer privacy, particularly given the use of consumer data and expanded data sets in some AI applications.” He noted that many banks use AI technologies and are investing in AI research and applications to automate, augment, or replicate human analysis and decision-making tasks. Therefore, the agency “is continuing to update supervisory guidance, examination programs and examiner skills to respond to AI’s growing use.” Greenfield also pointed out that the agency follows a risk-based supervision model focused on safe, sound, and fair banking practices, as well as compliance with laws and regulations, including fair lending and other consumer protection requirements. This risk-based approach includes developing supervisory strategies based upon an individual bank’s risk profile and examiners’ review of new, modified, or expanded products and services. 
Greenfield further noted that “the OCC is focused on educating examiners on a wide range of AI uses and risks, including risks associated with third parties, information security and resilience, compliance, BSA, credit underwriting, and fair lending and data governance, as part of training courses and other educational resources.” According to Greenfield’s oral statement, “banks need effective risk management and controls for model validation and explainability, data management, privacy, and security regardless of whether a bank develops AI tools internally or purchases through a third party.”
DOJ and EEOC address AI employment decision disability discrimination
On May 12, the DOJ and the Equal Employment Opportunity Commission (EEOC) released a technical assistance document addressing disability discrimination when using artificial intelligence (AI) and other software tools to make employment decisions. According to the announcement, the DOJ’s guidance document, Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring, provides a broad overview of rights and responsibilities in plain language, and, among other things, (i) provides examples of technological tools used by employers; (ii) clarifies that employers must consider the impact on different disabilities when designing or choosing technological tools; (iii) describes employers’ obligations under the Americans with Disabilities Act (ADA) when using algorithmic decision-making tools; and (iv) provides information for employees on actions they may take if they believe they have experienced discrimination. The EEOC also released a technical assistance document, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees, which focuses on preventing discrimination against job seekers and employees with disabilities.