DOJ Settles Claims of Algorithmic Bias


3 minute read | June 22, 2022

On June 21, the United States Department of Justice announced that it had secured a “groundbreaking” settlement resolving claims brought against a large social media platform for allegedly engaging in discriminatory advertising in violation of the Fair Housing Act. The settlement is one of the first significant federal actions involving claims of algorithmic bias, and it may illustrate the difficulty of applying “disparate impact” analysis under the anti-discrimination laws to complex algorithms, an area of increasingly intense regulatory focus.

The complaint

The complaint alleges that the social media company’s ad targeting and delivery system relied on vast troves of data that the company had collected about its users, including data regarding users’ race, religion, sex, disability, national origin, or familial status, and allowed advertisers to target ads based on protected characteristics or proxies for protected characteristics. The complaint alleges that the company discriminated in three related ways:

  • First, it alleges that until 2019 the company allowed advertisers to target housing ads based on consumers’ protected characteristics, including race, color, national origin, sex, disability, and familial status, or proxies for those characteristics. 
  • Second, it alleges that the company’s advertising platform employed a machine-learning algorithm that allowed advertisers to target ads to consumers who “look like” a particular kind of “source audience,” defined, in part, by reference to protected characteristics, including sex, or proxies for protected characteristics (see the illustrative sketch following this list).
  • Third, it alleges that the company’s “personalization algorithms” resulted in certain housing ads being targeted to potential customers — or not provided to potential customers — based on protected characteristics, or proxies for those characteristics.
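
To make the proxy mechanism behind the second and third allegations concrete, the sketch below is a purely hypothetical Python illustration built on synthetic data; it does not describe the platform’s actual models. It shows how a “lookalike” rule that never reads a protected characteristic can still skew along that characteristic when an ostensibly neutral feature is correlated with it.

```python
# Hypothetical sketch with synthetic data -- not the platform's actual system.
# A "lookalike" rule that never reads the protected characteristic can still
# skew along it when a neutral-seeming feature is correlated with it.
import random

random.seed(0)

def make_user():
    group = random.choice(["A", "B"])          # protected characteristic
    # Group A users mostly live in ZIP cluster 0; group B mostly in cluster 1.
    likely_cluster = 0 if group == "A" else 1
    zip_cluster = likely_cluster if random.random() < 0.8 else 1 - likely_cluster
    return {"group": group, "zip_cluster": zip_cluster}

population = [make_user() for _ in range(10_000)]

# The advertiser's "source audience" happens to be drawn entirely from group A.
source = [u for u in population if u["group"] == "A"][:500]

# A naive stand-in for a similarity model: target everyone who shares the
# source audience's most common feature value.
modal_cluster = max((0, 1), key=lambda c: sum(u["zip_cluster"] == c for u in source))
lookalike = [u for u in population if u["zip_cluster"] == modal_cluster]

def group_a_share(users):
    return sum(u["group"] == "A" for u in users) / len(users)

print(f"Group A share of population:         {group_a_share(population):.0%}")
print(f"Group A share of lookalike audience: {group_a_share(lookalike):.0%}")
# The lookalike audience over-represents group A because ZIP cluster
# acts as a proxy for the protected characteristic.
```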

The complaint follows the Department of Housing and Urban Development’s charge of discrimination filed in 2019, which the company elected to adjudicate in federal district court. It also follows the settlement of private litigation in 2019, which addressed some but not all of the conduct alleged in yesterday’s complaint.

The settlement

The company denied all wrongdoing but agreed to the entry of an order requiring it to change its advertising practices in order to resolve the claims. The settlement, if entered by the court, would require the company to:

  • Ensure that all housing ads are available to the platform’s users.
  • Stop using the tool that allows targeting of housing ads based on consumers who “look like” a particular “source audience” by December 31, 2022.
  • Develop a new system that addresses disparities between advertisers’ target audiences and the consumers who actually receive ads based on the company’s personalization algorithms. The new system will be reviewed by an independent third party to ensure that it meets agreed-upon metrics for reducing the alleged disparities (a simplified illustration of one such disparity metric follows this list).
  • Cease providing targeting options for housing ads that describe or relate to protected characteristics under the Fair Housing Act.
  • Pay a civil money penalty of $115,054.
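
For illustration only, and not as a description of the settlement’s actual methodology (which the announcement does not detail), the snippet below shows one simple, hypothetical way to quantify the kind of delivery disparity the new system is meant to reduce: the gap between the demographic makeup of the advertiser’s eligible audience and that of the users who actually received the ad.

```python
# Hypothetical disparity metric (assumed, for illustration only): the largest
# gap between any group's share of the eligible audience and its share of the
# audience that actually received the ad.

def demographic_shares(users):
    """Return each group's share of the given audience."""
    total = len(users)
    counts = {}
    for u in users:
        counts[u["group"]] = counts.get(u["group"], 0) + 1
    return {g: n / total for g, n in counts.items()}

def max_share_gap(eligible, delivered):
    """Largest absolute difference in any group's share between the two audiences."""
    e, d = demographic_shares(eligible), demographic_shares(delivered)
    return max(abs(e.get(g, 0.0) - d.get(g, 0.0)) for g in set(e) | set(d))

# Toy data: the eligible audience is evenly split, but delivery is skewed.
eligible = [{"group": "A"}] * 500 + [{"group": "B"}] * 500
delivered = [{"group": "A"}] * 320 + [{"group": "B"}] * 180

print(f"Largest group-share gap between eligible and delivered audiences: "
      f"{max_share_gap(eligible, delivered):.2f}")   # 0.14 in this toy example
```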

The settlement is contingent on the parties reaching agreement by December 31, 2022, regarding the sufficiency of the company’s new system for personalizing ads for consumers. If they cannot reach agreement, the settlement will be void and the parties will return to litigation.

The significance

Agencies across the federal government have been warning regulated institutions that the use of machine learning to crunch reams of data could lead to discriminatory decisions if the data itself relies on protected characteristics or close proxies of such characteristics. In March 2021, federal financial regulators issued a Request for Information regarding the use of artificial intelligence, including machine learning, by financial institutions. The agencies have not yet issued comprehensive guidance regarding the use of artificial intelligence. Accordingly, regulated financial institutions subject to fair lending laws should consider how the principles underlying DOJ’s claims, and the negotiated resolution of those claims, would apply to their own use of algorithmic decision-making.
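
By way of a purely illustrative example, with assumed numbers and an assumed threshold rather than any regulatory requirement, the sketch below shows the sort of basic disparate-impact check a compliance team might run on an algorithmic decision system: comparing outcome rates across groups and flagging ratios that fall below the familiar “four-fifths” rule of thumb.

```python
# Hypothetical disparate-impact check (assumed numbers and threshold).

def approval_rate(decisions):
    """Share of favorable outcomes (1 = favorable, 0 = unfavorable)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected_group, comparison_group):
    """Ratio of the protected group's favorable-outcome rate to the comparison group's."""
    return approval_rate(protected_group) / approval_rate(comparison_group)

# Toy outcomes from a hypothetical model, for illustration only.
group_a = [1] * 60 + [0] * 40   # 60% favorable
group_b = [1] * 42 + [0] * 58   # 42% favorable

ratio = adverse_impact_ratio(group_b, group_a)
print(f"Adverse impact ratio: {ratio:.2f}")          # 0.70
if ratio < 0.8:
    print("Below the four-fifths benchmark; review whether input features act as proxies.")
```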

If you have any questions regarding the settlement, please contact John Coleman or an Orrick attorney with whom you have worked in the past.