InfoBytes Blog

Financial Services Law Insights and Observations

FTC provides AI guidance

Federal Issues, Big Data, FTC, Artificial Intelligence, FTC Act, FCRA, ECOA, Consumer Protection, Fintech

On April 19, the FTC’s Bureau of Consumer Protection published a blog post identifying lessons for managing the consumer protection risks of artificial intelligence (AI) technology and algorithms. According to the FTC, the Commission has for years addressed the challenges presented by the use of AI and algorithms to make decisions about consumers, and has taken many enforcement actions against companies for allegedly violating laws such as the FTC Act, the FCRA, and ECOA when using AI and machine learning technology. The FTC stated that it has drawn on its experience with these laws to: (i) report on big data analytics and machine learning; (ii) conduct a hearing on algorithms, AI, and predictive analytics; and (iii) issue business guidance on AI and algorithms. To assist companies navigating AI, the FTC provided the following guidance:

  • Start with the right foundation. From the beginning, companies should consider ways to enhance data sets, design models to account for data gaps, and confine where or how models are used. The FTC advised that if a “data set is missing information from particular populations, using that data to build an AI model may yield results that are unfair or inequitable to legally protected groups.” 
  • Watch out for discriminatory outcomes. It is vital for companies to test algorithms, both before use and periodically thereafter, to ensure they do not discriminate on the basis of race, gender, or other protected classes (an illustrative sketch of one such test follows this list).
  • Embrace transparency and independence. Companies should consider how to embrace transparency and independence, such as “by using transparency frameworks and independent standards, by conducting and publishing the results of independent audits, and by opening . . . data or source code to outside inspection.”
  • Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results. Under the FTC Act, company “statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence.”
  • Data transparency. In its AI guidance issued last year, as previously covered by InfoBytes, the FTC warned companies to be careful about how they obtain the data that powers their models.
  • Do more good than harm. Companies are warned that if their models cause “more harm than good—that is, in Section 5 parlance, if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition—the FTC can challenge the use of that model as unfair.”
  • Importance of accountability. Echoing its points on transparency and independence, the FTC cautions companies to hold themselves accountable for how their algorithms perform, or the FTC may do it for them.
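
For readers wondering what periodic testing for discriminatory outcomes can look like in practice, below is a minimal, hypothetical sketch in Python of a four-fifths-rule style disparate impact check on a model’s approval decisions. It assumes a company already has each applicant’s protected-class label and the model’s approve/deny outcome; the 0.8 threshold, function names, and sample data are illustrative assumptions only, not an FTC-prescribed methodology or a legal standard.

```python
# Illustrative sketch only: one simple way a company might screen model
# outcomes for disparate impact before deployment and periodically thereafter.
# The 80% (four-fifths) threshold and all names here are assumptions for
# illustration, not an FTC-prescribed test.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def adverse_impact_ratios(decisions, threshold=0.8):
    """Compare each group's approval rate to the highest-rate group."""
    rates = selection_rates(decisions)
    best = max(rates.values()) or 1.0  # avoid division by zero if nothing was approved
    return {
        g: {"rate": r, "ratio": r / best, "flag": (r / best) < threshold}
        for g, r in rates.items()
    }

if __name__ == "__main__":
    # Toy data: group "B" is approved far less often than group "A".
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    for group, result in adverse_impact_ratios(sample).items():
        print(group, result)
```

A flagged group in a screen like this would not by itself establish an ECOA or FTC Act violation, but it is the kind of before-and-after testing the guidance suggests companies document as part of monitoring their models.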