InfoBytes Blog

Financial Services Law Insights and Observations

  • White House orders DOJ and CFPB to better protect citizens’ sensitive personal data

    Privacy, Cyber Risk & Data Security

    On March 1, the White House released Executive Order 14117 (E.O.), titled “Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern,” to establish safeguards for Americans’ private information. The E.O. was preceded by a White House fact sheet describing protections for Americans’ genomic and biometric information, personal health data, geolocation data, and financial data, among other categories. The E.O. explained how this data can be exploited by nefarious actors, such as foreign intelligence services or companies, enabling privacy violations. Under the E.O., President Biden ordered several agencies to act but primarily called on the DOJ. The president directed the DOJ to issue regulations protecting Americans’ data from exploitation by certain countries. The White House also directed the DOJ to issue regulations protecting government-related data, specifically citing protections for geolocation information and information about military members. Lastly, the DOJ was directed to work with DHS to prevent certain countries from accessing citizens’ data through commercial means, and the CFPB was encouraged to “[take] steps, consistent with CFPB’s existing legal authorities, to protect Americans from data brokers that are illegally assembling and selling extremely sensitive data, including that of U.S. military personnel.”

    A few days earlier, the DOJ released its own fact sheet detailing proposals to implement the E.O., focusing on national security risks and data security. The fact sheet highlighted that current laws leave open lawful access to vast amounts of Americans’ sensitive personal data, which may be purchased and accessed through commercial relationships. In response to the E.O., the DOJ plans to release regulations “addressing transactions that involve [Americans’] bulk sensitive data” that pose a risk of access by countries of concern: China (including Hong Kong and Macau), Russia, Iran, North Korea, Cuba, and Venezuela. The DOJ will also release an Advance Notice of Proposed Rulemaking (ANPRM) to provide details of the proposal(s) and to solicit comments.

    Privacy, Cyber Risk & Data Security Federal Issues Department of Justice CFPB Executive Order Department of Homeland Security White House Big Data China Russia Iran North Korea Cuba Venezuela

  • CFTC speech highlights new executives, dataset use, and AI Task Force

    Privacy, Cyber Risk & Data Security

    On November 16, CFTC Chairman Rostin Behnam delivered a speech at the 2023 U.S. Treasury Market Conference in New York, where he outlined the CFTC’s plans to make better use of data and to roll out an internal AI task force. One initiative is the hiring of two new executive-level roles: a Chief Data Officer and a Chief Data Scientist. These executives will manage how the CFTC uses AI tools and oversee current processes, including understanding and cleaning large datasets, identifying and monitoring pockets of stress, and combating spoofing.

    The CFTC also unveiled plans to create an AI Task Force that will “gather[] information about the current and potential uses of AI by our registered entities, registrants, and market participants in areas such as trading, risk management, and cybersecurity.” The Commission plans to obtain feedback for the AI Task Force through a formal Request for Comment process in 2024 and hopes the comments will help the agency craft rulemaking on “safety and security, mitigation of bias, and customer protection.”

    Privacy, Cyber Risk & Data Security CFTC Big Data Artificial Intelligence Spoofing

  • FTC provides AI guidance

    Federal Issues

    On April 19, the FTC’s Bureau of Consumer Protection published a blog post identifying lessons learned for managing the consumer protection risks of artificial intelligence (AI) technology and algorithms. According to the FTC, the Commission has for years addressed the challenges presented by the use of AI and algorithms to make decisions about consumers, and has taken numerous enforcement actions against companies for allegedly violating laws such as the FTC Act, FCRA, and ECOA when using AI and machine learning technology. The FTC stated that it has used its expertise with these laws to: (i) report on big data analytics and machine learning; (ii) conduct a hearing on algorithms, AI, and predictive analytics; and (iii) issue business guidance on AI and algorithms. To assist companies navigating AI, the FTC provided the following guidance:

    • Start with the right foundation. From the beginning, companies should consider ways to enhance data sets, design models to account for data gaps, and confine where or how models are used. The FTC advised that if a “data set is missing information from particular populations, using that data to build an AI model may yield results that are unfair or inequitable to legally protected groups.”
    • Watch out for discriminatory outcomes. It is vital for companies to test algorithms—both prior to use and periodically thereafter—to prevent discrimination based on race, gender, or other protected classes (a minimal example of such a test is sketched after this list).
    • Embrace transparency and independence. Companies should consider how to embrace transparency and independence, such as “by using transparency frameworks and independent standards, by conducting and publishing the results of independent audits, and by opening . . . data or source code to outside inspection.”
    • Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results. Under the FTC Act, company “statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence.”
    • Data transparency. In its AI guidance issued last year, as previously covered by InfoBytes, the FTC warned companies to be careful about how they obtain the data that powers their models.
    • Do more good than harm. Companies are warned that if their model causes “more harm than good—that is, in Section 5 parlance, if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition—the FTC can challenge the use of that model as unfair.”
    • Importance of accountability. The FTC stresses the importance of transparency and independence and cautions companies to hold themselves accountable, or the FTC may do it for them.
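
    The FTC does not prescribe a particular testing methodology, but the “discriminatory outcomes” point above is often operationalized as a periodic disparate-impact screen. Below is a minimal, hypothetical sketch in Python: it compares a model’s approval rates across groups against the classic four-fifths (80%) rule of thumb. The group labels, sample data, and the 80% threshold are illustrative assumptions, not FTC requirements.

    ```python
    # Hypothetical disparate-impact screen: compare approval rates by group
    # against the four-fifths (80%) rule of thumb. Illustrative only; not an
    # FTC-prescribed test.
    from collections import defaultdict

    def selection_rates(outcomes):
        """outcomes: iterable of (group, approved) pairs -> approval rate per group."""
        totals, approvals = defaultdict(int), defaultdict(int)
        for group, approved in outcomes:
            totals[group] += 1
            approvals[group] += int(approved)
        return {g: approvals[g] / totals[g] for g in totals}

    def four_fifths_screen(outcomes, threshold=0.8):
        """Flag groups whose approval rate falls below `threshold` times the
        highest group's rate (a common disparate-impact screen)."""
        rates = selection_rates(outcomes)
        top = max(rates.values())
        return {g: (rate, rate / top >= threshold) for g, rate in rates.items()}

    if __name__ == "__main__":
        # Hypothetical post-deployment decision log: (group, approved?)
        decisions = ([("A", True)] * 80 + [("A", False)] * 20
                     + [("B", True)] * 55 + [("B", False)] * 45)
        for group, (rate, passes) in sorted(four_fifths_screen(decisions).items()):
            print(f"group {group}: approval rate {rate:.2f}, passes 80% screen: {passes}")
    ```

    In this invented example, group B’s approval rate (0.55) is about 69% of group A’s (0.80), so the screen flags it for further review; a failing screen is a prompt for investigation, not a legal conclusion.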

    Federal Issues Big Data FTC Artificial Intelligence FTC Act FCRA ECOA Consumer Protection Fintech

  • FTC provides guidance on managing consumer protection risks when using AI and algorithms

    Federal Issues

    On April 8, the FTC’s Bureau of Consumer Protection published a blog post discussing ways companies can manage the consumer protection risks of artificial intelligence (AI) technology and algorithms. According to the FTC, the Commission has for years dealt with the challenges presented by the use of AI and algorithms to make decisions about consumers, and has taken numerous enforcement actions against companies for allegedly violating laws such as the FTC Act, FCRA, and ECOA when using AI and machine learning technology. Financial services companies have also been applying these laws to machine-based credit underwriting models, the FTC stated. To assist companies, the FTC provided the following guidance:

    • Be transparent. Companies should not mislead consumers about how automated tools will be used and should be transparent when collecting sensitive data to feed an algorithm. Companies that make automated eligibility decisions about “credit, employment, insurance, housing, or similar benefits and transactions” based on information provided by a third-party vendor are required to provide consumers with “adverse action” notices under the FCRA.
    • Explain decisions to consumers. Companies should be specific when disclosing to consumers the reasons why a decision was made if AI or automated tools were used in the decision-making process.
    • Ensure fairness. Companies should avoid discrimination based on protected classes and should consider both inputs and outcomes to manage the consumer protection risks inherent in using AI and algorithmic tools. Companies should also provide consumers with access to the information used to make a decision that may be adverse to the consumer’s interest, along with an opportunity to dispute its accuracy.
    • Ensure data and models are robust and sound. According to the FTC, companies that compile and sell consumer information for use in automated decision-making to determine a consumer’s eligibility for credit or other transactions (even if they are not consumer reporting agencies) may be subject to the FCRA and should “implement reasonable procedures to ensure maximum possible accuracy of consumer reports and provide consumers with access to their own information, along with the ability to correct any errors.” AI models should also be validated to ensure they work correctly and do not illegally discriminate (a minimal validation sketch follows this list).
    • Accountability. Companies should consider several factors before using AI or other automated tools, including the accuracy of the data set, predictions based on big data, and whether the data models account for biases or raise ethical or fairness concerns. Companies should also protect these tools from unauthorized use and consider what accountability mechanisms are being employed to ensure compliance.
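
    As a companion to the “robust and sound” point above, here is a minimal, hypothetical sketch of holdout validation that reports a model’s accuracy overall and per group. The toy model, features, and group labels are invented for illustration; a real validation program would also cover calibration, drift monitoring, and adverse-action reason codes.

    ```python
    # Hypothetical holdout validation reporting overall and per-group accuracy.
    # The model, features, and groups are invented for illustration.

    def validate(model, holdout):
        """holdout: list of (features, group, actual) -> accuracy overall and by group."""
        correct, total = {}, {}
        for features, group, actual in holdout:
            for key in ("overall", group):
                total[key] = total.get(key, 0) + 1
                correct[key] = correct.get(key, 0) + int(model(features) == actual)
        return {key: correct[key] / total[key] for key in total}

    if __name__ == "__main__":
        # Toy "model": approve when income-to-debt ratio exceeds 2.0.
        model = lambda f: f["income"] / f["debt"] > 2.0
        holdout = [
            ({"income": 90, "debt": 30}, "A", True),
            ({"income": 50, "debt": 30}, "A", False),
            ({"income": 80, "debt": 30}, "B", True),
            ({"income": 40, "debt": 30}, "B", True),   # model gets this one wrong
        ]
        for key, acc in sorted(validate(model, holdout).items()):
            print(f"{key}: accuracy {acc:.2f}")
    ```

    Reporting accuracy per group, not just overall, is what surfaces the kind of uneven performance the FTC’s fairness guidance warns about; an aggregate number alone can hide a model that works well for one population and poorly for another.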

    Federal Issues FTC Act FTC Artificial Intelligence ECOA FCRA Big Data Consumer Protection

  • NIST publishes updated Big Data Interoperability Framework

    Privacy, Cyber Risk & Data Security

    On October 21, the National Institute of Standards and Technology (NIST) released the second revision of its Big Data Interoperability Framework (NBDIF), which aims to “develop consensus on important, fundamental concepts related to Big Data” with the understanding that Big Data systems can “overwhelm traditional technical approaches,” including traditional approaches to privacy and data security. Modest updates were made to Volume 4 of the NBDIF, which focuses on privacy and data security, including a recommended layered approach to Big Data system transparency. Volume 4 introduces three transparency levels. Level 1 involves a System Communicator that “provides online explanations to users or stakeholders” about how information is processed and retained in a Big Data system, along with records of “what has been disclosed, accepted, or rejected.” At the most mature levels, transparency includes developing digital ontologies (a multi-level architecture for digital data management) across domain-specific Big Data systems to enable adaptable privacy and security configurations based on user characteristics and populations. Largely intact, however, are the Big Data Safety Levels in Appendix A, which are voluntary (standalone) standards regarding best practices for privacy and data security in Big Data systems and include application security, business continuity, and transparency aspects.
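
    To make the level-1 concept concrete, here is a minimal, hypothetical sketch of a System Communicator: a component that publishes a plain-language explanation of processing and retention and keeps a record of what has been disclosed, accepted, or rejected. The class and field names are illustrative assumptions; NBDIF Volume 4 describes the concept but does not prescribe an API.

    ```python
    # Hypothetical level-1 "System Communicator" per the NBDIF description:
    # explains processing/retention and logs disclosed/accepted/rejected items.
    # Names and structure are illustrative assumptions, not an NBDIF API.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class SystemCommunicator:
        processing_notice: str   # plain-language explanation shown to users
        retention_notice: str
        log: list = field(default_factory=list)

        def explain(self) -> str:
            """Online explanation of how information is processed and retained."""
            return f"{self.processing_notice} {self.retention_notice}"

        def record(self, user_id: str, item: str, decision: str) -> None:
            """decision: 'disclosed', 'accepted', or 'rejected'."""
            self.log.append({
                "user": user_id,
                "item": item,
                "decision": decision,
                "at": datetime.now(timezone.utc).isoformat(),
            })

    if __name__ == "__main__":
        sc = SystemCommunicator(
            "Geolocation data is aggregated for traffic analysis.",
            "Raw records are retained for 30 days.",
        )
        print(sc.explain())
        sc.record("user-42", "geolocation", "rejected")  # user opted out
        print(sc.log)
    ```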

    Privacy, Cyber Risk & Data Security Big Data NIST
