
InfoBytes Blog

Financial Services Law Insights and Observations



  • Barr says AI should not create racial disparities in lending

    On February 7, Federal Reserve Board Vice Chair for Supervision Michael S. Barr delivered remarks at the “Banking on Financial Inclusion” conference, where he warned financial institutions to ensure that their use of artificial intelligence (AI) and algorithms does not create racial disparities in lending decisions. Banks “should review the underlying models, such as their credit scoring and underwriting systems, as well as their marketing and loan servicing activities, just as they should for more traditional models,” Barr said, pointing to findings that show “significant and troubling disparities in lending outcomes for Black individuals and businesses relative to others.” He commented that “[w]hile research suggests that progress has been made in addressing racial discrimination in mortgage lending, regulators continue to find evidence of redlining and pricing discrimination in mortgage lending at individual institutions.” Studies have also found persistent discrimination in other markets, including auto lending and lending to Black-owned businesses. Barr further noted that despite significant progress over the past 25 years in expanding access to banking services, a recent FDIC survey found an unbanked rate of 11.3 percent for Black households, compared with 2.1 percent for White households.

    Barr suggested several measures for addressing these issues and eradicating discrimination. Banks should actively analyze their data to identify where racial disparities occur (a simplified illustration of that kind of analysis appears below), conduct on-the-ground testing to identify discriminatory practices, and review AI and other algorithms used in making lending decisions, Barr advised. Banks should also devote resources to stamping out unfair, abusive, or illegal practices and look for opportunities to support and invest in low- and moderate-income (LMI) communities, small businesses, and community infrastructure. Regulators, meanwhile, have a clear responsibility to use their supervisory and enforcement tools to ensure that banks resolve consumer protection weaknesses, Barr said, adding that regulators should also make sure that rules provide appropriate incentives for banks to invest in LMI communities and lend to LMI households.

    Bank Regulatory Federal Issues Federal Reserve Supervision Discrimination Artificial Intelligence Algorithms Consumer Finance Fair Lending
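
    The data analysis Barr recommends can start with something as simple as comparing outcome rates across demographic groups. The sketch below is a minimal, hypothetical Python illustration of that idea: it computes approval rates by group and flags any group whose rate falls below four-fifths of a reference group's rate, a common screening heuristic. The data, group labels, and 0.80 threshold are assumptions for illustration only and are not drawn from Barr's remarks or any regulatory guidance.

    # Minimal, hypothetical sketch of a portfolio-level disparity screen.
    # Data, group labels, and the 0.80 threshold are illustrative assumptions.
    from collections import defaultdict

    def approval_rates(decisions):
        """decisions: iterable of (group, approved) pairs -> approval rate per group."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, was_approved in decisions:
            totals[group] += 1
            approved[group] += int(was_approved)
        return {g: approved[g] / totals[g] for g in totals}

    def adverse_impact_ratios(rates, reference_group):
        """Ratio of each group's approval rate to the reference group's rate."""
        ref = rates[reference_group]
        return {g: r / ref for g, r in rates.items() if g != reference_group}

    if __name__ == "__main__":
        sample = [("A", True), ("A", True), ("A", False),
                  ("B", True), ("B", False), ("B", False)]
        rates = approval_rates(sample)
        for group, ratio in adverse_impact_ratios(rates, reference_group="A").items():
            flag = "review" if ratio < 0.80 else "ok"
            print(f"group {group}: approval {rates[group]:.0%}, ratio {ratio:.2f} ({flag})")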

  • DOJ, HUD say Fair Housing Act extends to algorithm-based tenant screening

    Federal Issues

    On January 9, the DOJ and HUD announced that they had filed a joint statement of interest in a pending action alleging discrimination under the Fair Housing Act (FHA) against Black and Hispanic rental applicants based on the use of an algorithm-based tenant screening system. The lawsuit, filed in the U.S. District Court for the District of Massachusetts, alleged that Black and Hispanic rental applicants who use housing vouchers to pay part of their rent were denied rental housing due to their “SafeRent Score,” which is derived from the defendants’ algorithm-based screening software. The plaintiffs claimed that the algorithm relies on factors that disproportionately disadvantage Black and Hispanic applicants, such as credit history and non-tenancy-related debts, and fails to consider that the use of HUD-funded housing vouchers makes such tenants more likely to pay their rent. Through the statement of interest, the agencies seek to clarify two questions of law they claim the defendants erroneously represented in their motions to dismiss: (i) the appropriate standard for pleading disparate impact claims under the FHA; and (ii) the types of companies to which the FHA applies.

    The agencies first argued that the defendants misstated the proper pleading standard for a claim of disparate impact under the FHA. Explaining that to establish an FHA disparate impact claim, “plaintiffs must show ‘the occurrence of certain outwardly neutral practices’ and ‘a significantly adverse or disproportionate impact on persons of a particular type produced by the defendant’s facially neutral acts or practices,’” the agencies disagreed with the defendants’ assertion that the plaintiffs “must also allege specific facts establishing that the policy is ‘artificial, arbitrary, and unnecessary.’” This contention, the agencies said, “conflates the burden-shifting framework for proving disparate impact claims with the pleading burden.” The agencies also rejected arguments that the plaintiffs must challenge the entire “formula” of the scoring system, rather than a single element, in order to allege a statistical disparity, and that they must provide “statistical findings specific to the disparate impact of the scoring system.” According to the agencies, the plaintiffs adequately identified an “essential nexus” between the algorithm’s scoring system and the disproportionate effect on certain rental applicants based on race.

    The agencies also explained that residential screening companies, including the defendants, fall within the FHA’s purview. While the defendants argued that the FHA does not apply to companies “that are not landlords and do not make housing decisions, but only offer services to assist those that do make housing decisions,” the agencies countered that this reading misconstrues the FHA’s clear statutory language and cited case law affirming that FHA liability reaches “a broad array of entities providing housing-related services.”

    “Housing providers and tenant screening companies that use algorithms and data to screen tenants are not absolved from liability when their practices disproportionately deny people of color access to fair housing opportunities,” Assistant Attorney General Kristen Clarke of the DOJ’s Civil Rights Division stressed. “This filing demonstrates the Justice Department’s commitment to ensuring that the Fair Housing Act is appropriately applied in cases involving algorithms and tenant screening software.”

    Federal Issues Courts DOJ Fair Housing Act Artificial Intelligence HUD Algorithms Discrimination Disparate Impact

  • CFPB fines fintech for algorithm-induced overdraft charges

    Federal Issues

    On August 10, the CFPB announced a consent order against a California-based fintech company for allegedly using an algorithm that caused consumers to incur overdraft fees on their checking accounts when using the company’s personal finance-management app. According to the Bureau, the app promotes automated savings with a proprietary algorithm that analyzes consumers’ checking-account data to determine when and how much to save for each consumer (a simplified illustration of this kind of auto-save logic appears below). The app then automatically transfers funds from consumers’ checking accounts to accounts held in the company’s name. The Bureau asserted, however, that the company engaged in deceptive acts or practices in violation of the CFPA by (i) causing consumers’ checking accounts to incur overdraft charges from their banks even though it guaranteed no overdrafts and represented that its app never transferred more than a consumer could afford; (ii) representing that it would reimburse overdraft charges while denying many such requests (the Bureau claims the company has received nearly 70,000 overdraft-reimbursement requests since 2017); and (iii) keeping interest that should have gone to consumers even though it told consumers it would not keep any interest earned on consumer funds. Under the terms of the consent order, the company must provide consumer redress for overdraft charges it previously declined to reimburse and must pay a $2.7 million civil penalty.

    Federal Issues CFPB Enforcement Consumer Finance Fintech Algorithms Overdraft Deceptive UDAAP CFPA
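
    The consent order centers on how the app's auto-save algorithm decided when and how much to transfer; the company's actual model is proprietary and is not described in the order. The Python sketch below is a hypothetical illustration of the general pattern only, showing how a transfer rule that ignores the current balance can overdraw an account and how a simple balance buffer would prevent it. All figures, names, and the buffer amount are assumptions.

    # Hypothetical auto-save transfer logic; not the company's actual algorithm.
    # All figures and the buffer amount are illustrative assumptions.

    def proposed_savings(avg_daily_spend: float) -> float:
        """Toy rule: set aside half of a day's average spending."""
        return round(0.5 * avg_daily_spend, 2)

    def safe_transfer_amount(balance: float, proposed: float, buffer: float = 100.0) -> float:
        """Cap the transfer so the checking account keeps at least `buffer` on hand,
        the kind of guard that would avoid algorithm-induced overdrafts."""
        return max(0.0, min(proposed, balance - buffer))

    if __name__ == "__main__":
        balance, avg_spend = 30.00, 80.00
        proposed = proposed_savings(avg_spend)  # 40.00, more than the account holds
        print(f"naive transfer: {proposed:.2f} -> balance {balance - proposed:.2f}")    # overdraws
        guarded = safe_transfer_amount(balance, proposed)
        print(f"guarded transfer: {guarded:.2f} -> balance {balance - guarded:.2f}")    # stays positive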

  • Special Alert: DOJ settles claims of algorithmic bias

    Federal Issues

    On June 21, the U.S. Department of Justice announced that it had secured a “groundbreaking” settlement resolving claims brought against a large social media platform for allegedly engaging in discriminatory advertising in violation of the Fair Housing Act. The settlement is one of the first significant federal actions involving claims of algorithmic bias, and it may signal the complexity of applying “disparate impact” analysis under the anti-discrimination laws to complex algorithms, an area of increasingly intense regulatory focus.

    Federal Issues DOJ Special Alerts Fair Housing Act Algorithms Advertisement Enforcement Settlement Disparate Impact Discrimination
