InfoBytes Blog

Financial Services Law Insights and Observations

  • FTC provides AI guidance

    Federal Issues

    On April 19, the FTC’s Bureau of Consumer Protection published a blog post identifying lessons learned for managing the consumer protection risks of artificial intelligence (AI) technology and algorithms. According to the FTC, the Commission has over the years addressed the challenges presented by the use of AI and algorithms to make decisions about consumers, and has taken many enforcement actions against companies for allegedly violating laws such as the FTC Act, FCRA, and ECOA when using AI and machine learning technology. The FTC stated that it has used its expertise with these laws to: (i) report on big data analytics and machine learning; (ii) conduct a hearing on algorithms, AI, and predictive analytics; and (iii) issue business guidance on AI and algorithms. To assist companies navigating AI, the FTC provided the following guidance:

    • Start with the right foundation. From the beginning, companies should consider ways to enhance data sets, design models to account for data gaps, and confine where or how models are used. The FTC advised that if a “data set is missing information from particular populations, using that data to build an AI model may yield results that are unfair or inequitable to legally protected groups.” 
    • Watch out for discriminatory outcomes. It is vital for companies to test algorithms, both before deployment and periodically thereafter, to prevent discrimination based on race, gender, or other protected classes (a minimal testing sketch follows this list).
    • Embrace transparency and independence. Companies should consider how to embrace transparency and independence, such as “by using transparency frameworks and independent standards, by conducting and publishing the results of independent audits, and by opening . . . data or source code to outside inspection.”
    • Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results. Under the FTC Act, company “statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence.”
    • Data transparency. In its AI guidance issued last year, previously covered by InfoBytes, the FTC warned companies to be careful about how they obtain the data that powers their models.
    • Do more good than harm. Companies are warned that if their models cause “more harm than good—that is, in Section 5 parlance, if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition—the FTC can challenge the use of that model as unfair.”
    • Importance of accountability. The FTC warns of the importance of being transparent and independent and cautions companies to hold themselves accountable or the FTC may do it for them.
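
    For the “discriminatory outcomes” item above, the following is a minimal sketch of one common form of periodic testing: comparing approval rates across demographic groups using the four-fifths (80%) rule of thumb. The group labels, the 0.8 threshold, and the toy decision log are illustrative assumptions; the FTC post does not prescribe any particular test.

    ```python
    # Hypothetical periodic check: compute each group's approval rate and flag
    # groups whose rate falls below 80% of the highest group's rate.
    from collections import defaultdict

    def approval_rates(decisions):
        """decisions: iterable of (group, approved) pairs."""
        totals, approvals = defaultdict(int), defaultdict(int)
        for group, approved in decisions:
            totals[group] += 1
            if approved:
                approvals[group] += 1
        return {g: approvals[g] / totals[g] for g in totals}

    def adverse_impact_ratios(decisions, threshold=0.8):
        """Return {group: (impact ratio, flagged)} using the four-fifths rule."""
        rates = approval_rates(decisions)
        best = max(rates.values())
        return {g: (r / best, r / best < threshold) for g, r in rates.items()}

    if __name__ == "__main__":
        # Toy decision log: (demographic group, model approved?)
        log = ([("A", True)] * 80 + [("A", False)] * 20 +
               [("B", True)] * 55 + [("B", False)] * 45)
        for group, (ratio, flagged) in adverse_impact_ratios(log).items():
            print(f"group {group}: impact ratio {ratio:.2f}"
                  + ("  <-- review" if flagged else ""))
    ```

    Running a check along these lines both before deployment and on each retraining or scoring cycle is one way to operationalize the FTC’s “prior to use and periodically after that” suggestion.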

    Federal Issues Big Data FTC Artificial Intelligence FTC Act FCRA ECOA Consumer Protection Fintech

  • FTC provides guidance on managing consumer protection risks when using AI and algorithms

    Federal Issues

    On April 8, the FTC’s Bureau of Consumer Protection published a blog post discussing ways companies can manage the consumer protection risks of artificial intelligence (AI) technology and algorithms. According to the FTC, over the years the Commission has dealt with the challenges presented by the use of AI and algorithms to make decisions about consumers, and has taken many enforcement actions against companies for allegedly violating laws such as the FTC Act, FCRA, and ECOA when using AI and machine learning technology. The FTC noted that financial services companies have also been applying these laws to machine-based credit underwriting models. To assist companies, the FTC provided the following guidance:

    • Be transparent. Companies should not mislead consumers about how automated tools will be used and should be transparent when collecting sensitive data to feed an algorithm. Companies that make automated eligibility decisions about “credit, employment, insurance, housing, or similar benefits and transactions” based on information provided by a third-party vendor are required to provide consumers with “adverse action” notices under the FCRA.
    • Explain decisions to consumers. Companies should be specific when disclosing to consumers the reasons why a decision was made if AI or automated tools were used in the decision-making process (a minimal reason-ranking sketch follows this list).
    • Ensure fairness. Companies should avoid discrimination based on protected classes and should consider both inputs and outcomes to manage consumer protection risks inherent in using AI and algorithmic tools. Companies should also provide consumers access and opportunity to dispute the accuracy of the information used to make a decision that may be adverse to the consumer’s interest.
    • Ensure data and models are robust and sound. According to the FTC, companies that compile and sell consumer information for use in automated decision-making to determine a consumer’s eligibility for credit or other transactions (even if they are not a consumer reporting agency), may be subject to the FCRA and should “implement reasonable procedures to ensure maximum possible accuracy of consumer reports and provide consumers with access to their own information, along with the ability to correct any errors.” The AI models should also be validated to ensure they work correctly and do not illegally discriminate.
    • Accountability. Companies should consider several factors before using AI or other automated tools, including the accuracy of the data set, predictions based on big data, and whether the data models account for biases or raise ethical or fairness concerns. Companies should also protect these tools from unauthorized use and consider what accountability mechanisms are being employed to ensure compliance.
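
    For the “explain decisions to consumers” item above, the following is a minimal sketch of one common approach to generating principal reasons from a simple linear scoring model: ranking features by how far they pull an applicant’s score below that of a most-favorable reference profile (a “points below max” style ranking). The feature names, weights, and reason phrasings are illustrative assumptions and are not drawn from the FTC guidance or the FCRA.

    ```python
    # Hypothetical linear scoring model: positive weights help the score,
    # negative weights hurt it. The reference profile holds the most favorable
    # value for each (normalized) feature.
    WEIGHTS = {"payment_history": 0.40, "utilization": -0.30, "recent_inquiries": -0.10}
    REFERENCE = {"payment_history": 1.0, "utilization": 0.0, "recent_inquiries": 0.0}
    REASON_TEXT = {
        "payment_history": "History of late or missed payments",
        "utilization": "Proportion of available credit in use is too high",
        "recent_inquiries": "Too many recent credit inquiries",
    }

    def principal_reasons(applicant, top_n=2):
        """Rank features by the score points lost relative to the reference
        profile and return plain-language reasons for the top contributors."""
        shortfalls = {f: WEIGHTS[f] * (REFERENCE[f] - applicant[f]) for f in WEIGHTS}
        ranked = sorted(shortfalls, key=shortfalls.get, reverse=True)
        return [REASON_TEXT[f] for f in ranked[:top_n] if shortfalls[f] > 0]

    if __name__ == "__main__":
        denied_applicant = {"payment_history": 0.6, "utilization": 0.9, "recent_inquiries": 0.8}
        print(principal_reasons(denied_applicant))
    ```

    A ranking along these lines is one way to produce the specific, decision-level reasons the guidance contemplates, rather than a generic statement that an algorithm was involved.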

    Federal Issues FTC Act FTC Artificial Intelligence ECOA FCRA Big Data Consumer Protection

  • NIST publishes updated Big Data Interoperability Framework

    Privacy, Cyber Risk & Data Security

    On October 21, the National Institute of Standards and Technology (NIST) released the second revision of its Big Data Interoperability Framework (NBDIF), which aims to “develop consensus on important, fundamental concepts related to Big Data” with the understanding that Big Data systems have the potential to “overwhelm traditional technical approaches,” including traditional approaches to privacy and data security. Modest updates were made to Volume 4 of the NBDIF, which focuses on privacy and data security, including a recommended layered approach to Big Data system transparency. Volume 4 introduces three transparency levels. At level 1, a System Communicator “provides online explanations to users or stakeholders” describing how information is processed and retained in a Big Data system, along with records of “what has been disclosed, accepted, or rejected.” At the most mature levels, transparency includes developing digital ontologies (a multi-level architecture for digital data management) across domain-specific Big Data systems to enable adaptable privacy and security configurations based on user characteristics and populations. Largely intact, however, are the Big Data Safety Levels in Appendix A, which are voluntary (standalone) standards setting out best practices for privacy and data security in Big Data systems, covering aspects such as application security, business continuity, and transparency.
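
    As a rough illustration of the level-1 System Communicator record-keeping described above, the sketch below logs what was disclosed to a user and whether each item was accepted or rejected. The field names and schema are assumptions for illustration only; the NBDIF does not prescribe a particular data structure.

    ```python
    # Hypothetical disclosure ledger for a level-1 "System Communicator":
    # each entry records what was explained to the user about processing and
    # retention, and whether the user accepted or rejected it.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import List

    @dataclass
    class DisclosureRecord:
        user_id: str
        explanation: str          # how information is processed and retained
        decision: str             # "disclosed", "accepted", or "rejected"
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    ledger: List[DisclosureRecord] = []

    def record(user_id: str, explanation: str, decision: str) -> None:
        ledger.append(DisclosureRecord(user_id, explanation, decision))

    if __name__ == "__main__":
        record("u-123", "Clickstream data retained for 90 days for fraud analytics", "disclosed")
        record("u-123", "Aggregated usage data shared with an analytics vendor", "rejected")
        for entry in ledger:
            print(entry)
    ```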

    Privacy/Cyber Risk & Data Security Big Data NIST