On April 8, the FTC’s Bureau of Consumer Protection published a blog post discussing ways for companies to manage the consumer protection risks of artificial intelligence (AI) technology and algorithms. According to the FTC, the Commission has dealt for years with the challenges presented by the use of AI and algorithms to make decisions about consumers, and has taken numerous enforcement actions against companies for allegedly violating laws such as the FTC Act, the FCRA, and ECOA when using AI and machine learning technology. Financial services companies have also been applying these laws to machine-based credit underwriting models, the FTC stated. To assist companies, the FTC provided the following guidance:
- Be transparent. Companies should not mislead consumers about how automated tools will be used and should be transparent when collecting sensitive data to feed an algorithm. Companies that make automated eligibility decisions about “credit, employment, insurance, housing, or similar benefits and transactions” based on information provided by a third-party vendor are required to provide consumers with “adverse action” notices under the FCRA.
- Explain decisions to consumers. Companies should be specific when disclosing to consumers the reasons why a decision was made if AI or automated tools were used in the decision-making process.
- Ensure fairness. Companies should avoid discrimination based on protected classes and should consider both inputs and outcomes to manage consumer protection risks inherent in using AI and algorithmic tools. Companies should also provide consumers access and opportunity to dispute the accuracy of the information used to make a decision that may be adverse to the consumer’s interest.
- Ensure data and models are robust and sound. According to the FTC, companies that compile and sell consumer information for use in automated decision-making to determine a consumer’s eligibility for credit or other transactions (even if they are not a consumer reporting agency), may be subject to the FCRA and should “implement reasonable procedures to ensure maximum possible accuracy of consumer reports and provide consumers with access to their own information, along with the ability to correct any errors.” The AI models should also be validated to ensure they work correctly and do not illegally discriminate.
- Ensure accountability. Companies should consider several factors before using AI or other automated tools, including the accuracy of the data set, predictions based on big data, and whether the data models account for biases or raise ethical or fairness concerns. Companies should also protect these tools from unauthorized use and consider what accountability mechanisms are employed to ensure compliance.
On October 21, the National Institute of Standards and Technology (NIST) released the second revision of its Big Data Interoperability Framework (NBDIF), which aims to “develop consensus on important, fundamental concepts related to Big Data” with the understanding that Big Data systems have the potential to “overwhelm traditional technical approaches,” including traditional approaches to privacy and data security. Modest updates were made to Volume 4 of the NBDIF, which focuses on privacy and data security, including a recommended layered approach to Big Data system transparency. Volume 4 introduces three transparency levels. At level 1, a System Communicator “provides online explanations to users or stakeholders” discussing how information is processed and retained in a Big Data system, as well as records of “what has been disclosed, accepted, or rejected.” At the most mature levels, transparency includes developing digital ontologies (a multi-level architecture for digital data management) across domain-specific Big Data systems to enable adaptable privacy and security configurations based on user characteristics and populations. Largely intact, however, are the Big Data Safety Levels in Appendix A, which are voluntary (standalone) standards regarding best practices for privacy and data security in Big Data systems and include application security, business continuity, and transparency aspects.
- Jonice Gray Tucker to moderate “Pandemic relief response and lasting impacts on access, credit, banking, and equality” at the American Bar Association Business Law Section Spring Meeting
- Jeffrey P. Naimon to discuss “Post-pandemic CFPB exam preparation” at the Mortgage Bankers Association Spring Conference & Expo
- Jonice Gray Tucker to discuss “Making fair lending work for you” at the Mortgage Bankers Association Spring Conference & Expo
- Jonice Gray Tucker to discuss “Reading the tea leaves of President Biden’s initial financial appointees” at LendIt Fintech
- Moorari K. Shah to discuss “CA, NY, federal licensing and disclosure” at the Equipment Leasing & Finance Association Legal Forum
- Jonice Gray Tucker to discuss “Compliance under Biden” at the WSJ Risk & Compliance Forum
- Sherry-Maria Safchuk to discuss UDAAP at an American Bar Association webinar
- Jonice Gray Tucker to discuss “The future of fair lending” at the Mortgage Bankers Association Legal Issues and Regulatory Compliance Conference