
InfoBytes Blog

Financial Services Law Insights and Observations

FTC provides guidance on managing consumer protection risks when using AI and algorithms

Federal Issues FTC Act FTC Artificial Intelligence ECOA FCRA Big Data Consumer Protection


On April 8, the FTC’s Bureau of Consumer Protection published a blog post discussing ways for companies to manage the consumer protection risks of artificial intelligence (AI) technology and algorithms. According to the FTC, the Commission has long dealt with the challenges presented by the use of AI and algorithms to make decisions about consumers, and has taken numerous enforcement actions against companies for allegedly violating laws such as the FTC Act, the FCRA, and ECOA when using AI and machine learning technology. Financial services companies, the FTC noted, have also been applying these laws to machine-based credit underwriting models. To assist companies, the FTC provided the following guidance:

  • Be transparent. Companies should not mislead consumers about how automated tools will be used and should be transparent when collecting sensitive data to feed an algorithm. Companies that make automated eligibility decisions about “credit, employment, insurance, housing, or similar benefits and transactions” based on information provided by a third-party vendor are required to provide consumers with “adverse action” notices under the FCRA.
  • Explain decisions to consumers. Companies should be specific when disclosing to consumers the reasons why a decision was made if AI or automated tools were used in the decision-making process.
  • Ensure fairness. Companies should avoid discrimination based on protected classes and should consider both inputs and outcomes to manage the consumer protection risks inherent in using AI and algorithmic tools. Companies should also provide consumers with access to, and an opportunity to dispute the accuracy of, the information used to make a decision that may be adverse to the consumer’s interests.
  • Ensure data and models are robust and sound. According to the FTC, companies that compile and sell consumer information for use in automated decision-making to determine a consumer’s eligibility for credit or other transactions (even if they are not a consumer reporting agency), may be subject to the FCRA and should “implement reasonable procedures to ensure maximum possible accuracy of consumer reports and provide consumers with access to their own information, along with the ability to correct any errors.” The AI models should also be validated to ensure they work correctly and do not illegally discriminate.
  • Hold yourself accountable. Companies should consider several factors before using AI or other automated tools, including the accuracy of the data set, predictions based on big data, and whether the data models account for biases or raise ethical or fairness concerns. Companies should also protect these tools from unauthorized use and consider what accountability mechanisms are being employed to ensure compliance.
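To make the fairness and validation points above concrete, the sketch below shows one common first-pass disparity screen on model outcomes: comparing each group's approval rate to the highest group's rate (the "four-fifths" screen). This is an illustrative assumption, not a method prescribed by the FTC's guidance; the data, group names, and 0.8 threshold are all hypothetical.

```python
def adverse_impact_ratio(outcomes):
    """For each group, compute the approval rate and divide it by the
    highest group's approval rate. Ratios well below 1.0 suggest the
    model's outcomes warrant closer fairness review."""
    rates = {group: sum(d) / len(d) for group, d in outcomes.items()}
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical model decisions (1 = approved, 0 = denied), keyed by a
# protected-class group label.
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 0, 1, 1],  # 80% approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],  # 40% approved
}

for group, ratio in adverse_impact_ratio(decisions).items():
    # Flag any group whose ratio falls below the illustrative 0.8 cutoff.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: ratio={ratio:.2f} -> {flag}")
```

A check like this looks only at outcomes; the FTC's guidance also calls for examining inputs and validating that the underlying data are accurate, so a ratio near 1.0 is a starting point rather than a clean bill of health.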