Federal agencies reaffirm commitment to confront AI-based discrimination
On April 25, the CFPB, DOJ, FTC, and Equal Employment Opportunity Commission (EEOC) released a joint statement reaffirming their commitment to protect the public from bias in automated systems and artificial intelligence (AI). “America’s commitment to the core principles of fairness, equality, and justice are deeply embedded in the federal laws that our agencies enforce to protect civil rights, fair competition, consumer protection, and equal opportunity,” the agencies said, emphasizing that existing legal authorities apply to the use of new technologies just as they do to any other conduct. The agencies have previously expressed concerns about potentially harmful AI applications, including black box algorithms, algorithmic marketing and advertising, abusive uses of AI technology, digital redlining, and repeat offenders’ use of AI, all of which may contribute to unlawful discrimination, perpetuate bias, and violate consumers’ rights.
“We already see how AI tools can turbocharge fraud and automate discrimination, and we won’t hesitate to use the full scope of our legal authorities to protect Americans from these threats,” FTC Chair Lina M. Khan said. “Technological advances can deliver critical innovation—but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition,” Khan added.
CFPB Director Rohit Chopra echoed Khan’s sentiments and said the Bureau, along with other agencies, is taking measures to address unchecked AI. “While machines crunching numbers might seem capable of taking human bias out of the equation, that’s not what is happening,” Chopra said. “When consumers and regulators do not know how decisions are made by artificial intelligence, consumers are unable to participate in a fair and competitive market free from bias,” Chopra added. The Director concluded by noting that the Bureau will continue to collaborate with other agencies to enforce federal consumer financial protection laws, regardless of whether the violations occur through traditional means or advanced technologies.
Additionally, Assistant Attorney General Kristen Clarke of the DOJ’s Civil Rights Division noted that “[a]s social media platforms, banks, landlords, employers and other businesses choose to rely on artificial intelligence, algorithms and other data tools to automate decision-making and to conduct business, we stand ready to hold accountable those entities that fail to address the discriminatory outcomes that too often result.”