
InfoBytes Blog

Financial Services Law Insights and Observations


  • CFTC’s subcommittee report on decentralized finance highlights its findings and recommendations

    Privacy, Cyber Risk & Data Security

    On January 8, the CFTC issued a report on decentralized finance ahead of the CFTC’s event on artificial intelligence, cybersecurity, and decentralized finance. Authored by the CFTC’s Subcommittee on Digital Assets and Blockchain Technology, a group of fintech experts selected by the CFTC, the report urged government and industry to work together to advance the development of decentralized finance in a responsible and compliant way.

    The report lists key findings and recommendations for policymakers to implement. For example, it highlights how policymakers should keep in mind customer and investor protections, the promotion of market integrity and financial stability, and efforts to combat illicit finance, among other considerations, when creating regulations. Recommendations for policymakers include increasing their technical understanding of this space, surveying the existing regulatory “perimeter,” identifying and cataloging risks, identifying the range of regulatory strategies, applying regulatory frameworks to digital identity, KYC, and AML regimes, and calibrating privacy protections in decentralized finance.

    For further reading on decentralized finance, IOSCO released a publication with nine recommendations, previously covered by InfoBytes here.

    Privacy, Cyber Risk & Data Security CFTC Decentralized Finance Blockchain IOSCO Financial Stability

  • FTC alleges data broker company mishandled consumer location data

    Federal Issues

    On January 9, the FTC released a proposed order and complaint against a data broker that sells consumer location data to companies. According to the complaint, which alleges seven violations of the FTC Act, the data broker company had no policies or procedures in place to remove any of the raw data from the location data sets that it sold, which could be used to identify sensitive personal information. The FTC alleges that because of this, the data broker company failed to provide “necessary technical safeguards” to ensure that consumers’ privacy choices were honored. The FTC also alleges that the data broker’s contracts with the entities purchasing the data were “insufficient to protect consumers from the substantial injury caused by the collection, transfer, and use of the consumers’ location data” as they visit sensitive locations, such as churches, healthcare facilities, and schools.

    The data broker company collected 10 billion location data points daily worldwide through its apps, but it failed to inform consumers that it sold this data to advertisers, employers, or government contractors. The FTC further alleges that the data broker’s business practices are likely to cause substantial injury to consumers due to its lack of reasonable data security measures.

    According to the proposed order, the company must comply with FTC mandates that include prohibiting misrepresentations regarding the data, prohibiting the use, sale, or disclosure of sensitive location data, and implementing a sensitive location data program. The data broker neither admits nor denies any wrongdoing, and the FTC did not levy a money judgment.

    Federal Issues Data Brokers Consumer Data FTC Act Privacy, Cyber Risk & Data Security

  • FSOC report highlights AI, climate, banking, and fintech risks; CFPB comments

    Privacy, Cyber Risk & Data Security

    On December 14, the Financial Stability Oversight Council released its 2023 Annual Report on financial stability vulnerabilities and recommendations to mitigate those risks. The report was cited in a statement by the Director of the CFPB, Rohit Chopra, to the Secretary of the Treasury. In his statement, Chopra said “[i]t is not enough to draft reports [on cloud infrastructure and artificial intelligence], we must also act” on plans to focus on ensuring financial stability with respect to digital technology in the upcoming year. In its report, the FSOC notes the U.S. banking system “remains resilient overall” despite several banking issues earlier this year. The FSOC’s analysis breaks down the health of the banking system for large and regional banks through review of a bank’s capital and profitability, credit quality and lending standards, and liquidity and funding. On regional banks specifically, the FSOC highlights that regional banks carry higher exposure to commercial real estate loans than large banks, a risk amplified by higher interest rates.

    In addition, the FSOC views climate-related financial risks as a threat to U.S. financial stability, presenting both physical and transition risks. Physical risks are acute events such as floods, droughts, wildfires, or hurricanes, which can impose additional risk-mitigation costs, force firm relocations, or threaten access to fair credit. Transition risks include technological changes, policy shifts, or changes in consumer preferences, all of which can force firms to take on additional costs. The FSOC notes that, as of September 2023, the U.S. had experienced 24 climate disaster events with losses exceeding $1 billion each, more than the annual average of 18 such events over the prior five years (2018 to 2022). The FSOC also notes that member agencies should monitor how third-party service providers, like fintech firms, address risks in core processing, payment services, and cloud computing. To underscore the need for oversight of these partnerships, the FSOC cites a study finding that 95 percent of cloud breaches result from human error. The FSOC highlights that fintech firms face compliance, financial, operational, and reputational risks, particularly when they are not subject to the same compliance standards as banks.

    Notably, the FSOC is the first top regulator to state that the use of Artificial Intelligence (AI) technology presents an “emerging vulnerability” in the U.S. financial system. The report notes that firms may use AI for fraud detection and prevention, as well as for customer service. The FSOC notes that AI offers benefits for financial institutions, including reducing costs, reducing inefficiencies, identifying complex relationships, and improving performance. The FSOC states that while “AI has the potential to spur innovation and drive efficiency,” it requires “thoughtful implementation and supervision” to mitigate potential risks.

    Privacy, Cyber Risk & Data Security Bank Regulatory FSOC CFPB Artificial Intelligence Banks Fintech

  • EU Commission, Council, and Parliament agree on details of AI Act

    Privacy, Cyber Risk & Data Security

    On December 9, the EU Commission announced a political agreement between the European Parliament and the European Council regarding the proposed Artificial Intelligence Act (AI Act). The agreement is provisional and subject to finalization of the text and formal approval by lawmakers in the European Parliament and the Council. The AI Act will regulate the development and use of AI systems and impose fines for non-compliant use. The objective of the law is to ensure that AI technology is safe and that its use respects fundamental democratic rights, while balancing the need to allow businesses to grow and thrive. The AI Act will also create a new European AI Office to ensure coordination and transparency and to “supervise the implementation and enforcement of the new rules.” According to this EU Parliament press release, powerful foundation models that pose systemic risks will be subject to specific rules in the final version of the AI Act based on a tiered classification.

    Except for foundation models, the EU AI Act adopts a risk-based approach to the regulation of AI systems, classifying them into different risk categories: minimal risk, high risk, and unacceptable risk. Most AI systems would be deemed minimal risk since they pose little to no risk to citizens’ safety. High-risk AI systems would be subject to the heaviest obligations, including certifications on the adoption of risk-mitigation systems, data governance, logging of activity, documentation obligations, transparency requirements, human oversight, and cybersecurity standards. Examples of high-risk AI systems include utility infrastructure, medical devices, institutional admissions, law enforcement, biometric identification and categorization, and emotion recognition systems. AI systems deemed “unacceptable” are those that “present a clear threat to the fundamental rights of people,” such as systems that manipulate human behavior, like “deep fakes,” and any type of social scoring done by governments or companies. While some biometric identification is allowed, “unacceptable” uses include emotion recognition systems in the workplace or by law enforcement agencies (with narrow exceptions).

    Sanctions for breach of the law will range from a low of €7.5 million or 1.5 percent of a company’s total global revenue to as much as €35 million or 7 percent of revenue. Once adopted, the law will take effect in early 2026 at the earliest. Compliance will be challenging (the law targets AI systems made available in the EU), and companies should assess whether their use and/or development of such systems will be affected.

    Privacy, Cyber Risk & Data Security Privacy European Union Artificial Intelligence Privacy/Cyber Risk & Data Security Of Interest to Non-US Persons

  • NYDFS settles with title insurance company for $1 million

    Privacy, Cyber Risk & Data Security

    On November 27, the NYDFS entered into a consent order with a title insurance company, requiring the company to pay $1 million for failing to maintain and implement an effective cybersecurity policy and to correct a cybersecurity vulnerability. The vulnerability allowed members of the public to access others’ nonpublic information, including driver’s license numbers, Social Security numbers, and tax and banking information. The consent order indicates the title insurance company discovered the vulnerability as early as 2018. The company’s failure to remediate the vulnerability violated Section 500.7 of the Cybersecurity Regulation.

    In May 2019, a cybersecurity journalist published an article on a vulnerability in the title insurance company’s application that led to the public exposure of 885 million documents, some found through search engine results. The journalist noted that “replacing the document ID in the web page URL… allow[ed] access to other non-related sessions without authentication.” Following the article, and as required by Section 500.17(a) of the Cybersecurity Regulation, the title insurance company notified NYDFS of the vulnerability, at which point NYDFS investigated further. The title insurance company has been ordered to pay the penalty no later than ten days after the effective date of the consent order.

    Privacy, Cyber Risk & Data Security State Issues Securities NYDFS Auto Insurance Enforcement

  • FTC orders prison contractor to fix security exposures after data breach

    Privacy, Cyber Risk & Data Security

    On November 16, the FTC issued a proposed order against an integrated technology services company finding a violation of Section 5(a) of the Federal Trade Commission Act. According to the order, the company offered various products and services to jails, prisons, and detention facilities. These products and services included means of communication between incarcerated and non-incarcerated individuals, and, among other things, allowed non-incarcerated individuals to deposit funds into the accounts of incarcerated individuals. According to the complaint, and due to the nature of its operations, the company collected individuals’ sensitive personally identifiable information, including names, addresses, passport numbers, driver’s license numbers, Social Security numbers, and financial account information, some of which was exposed as a result of a data breach in August 2020 due to a misconfiguration in the company’s cloud storage environment.

    In its decision, the FTC ordered the company to, among other things, (i) implement a comprehensive data security program, including “change management” measures and multifactor authentication; (ii) notify users affected by the data breach, who had not yet received notice, and offer credit monitoring and identity protection products; (iii) inform consumers and facilities within 30 days of future data breaches; and (iv) notify the FTC within 10 days of reporting any security incident to local, state, or federal authorities.

    Privacy, Cyber Risk & Data Security Federal Issues FTC Data Enforcement

  • CFTC speech highlights new executives, dataset use, and AI Task Force

    Privacy, Cyber Risk & Data Security

    On November 16, the Chairman of the CFTC, Rostin Behnam, delivered a speech during the 2023 U.S. Treasury Market Conference held in New York, where he outlined the CFTC’s plans to better use data and roll out an internal AI task force. One of the CFTC’s initiatives is the hiring of two new executive-level roles: a Chief Data Officer and a Chief Data Scientist. These executives will manage how the CFTC uses AI tools and oversee current processes, including understanding large datasets, cleaning the datasets, identifying and monitoring pockets of stress, and combating spoofing.

    The CFTC also unveiled its plans to create an AI Task Force and to “gather[] information about the current and potential uses of AI by our registered entities, registrants, and market participants in areas such as trading, risk management, and cybersecurity.” The Commission plans to obtain feedback for the AI Task Force through a formal Request for Comment process in 2024. The CFTC hopes these comments will help the agency create a rulemaking policy on “safety and security, mitigation of bias, and customer protection.”

    Privacy, Cyber Risk & Data Security CFTC Big Data Artificial Intelligence Spoofing

  • Minnesota amends health care provision in extensive new law

    Privacy, Cyber Risk & Data Security

    On November 9, the State of Minnesota enacted Chapter 70--S.F.No. 2995, a large bill amending certain sections of its health care provisions. The bill covers extensive changes, ranging from prescription contraceptives and hearing aids to mental health, long COVID, and childcare, among many other areas.

    One of the significant new provisions requires a hospital to first check whether a patient’s bill is eligible for charity care before sending it to a third-party collection agency. Further, the bill places new requirements on hospitals collecting medical debt before they can “garnish wages or bank accounts” of an individual. The Minnesota law also provides that a hospital wishing to use a third-party collection agency must first complete an affidavit attesting that it has checked whether the patient is eligible for charity care, confirmed proper billing, given the patient the opportunity to apply for charity care, and, under certain circumstances, offered a reasonable payment plan if the patient is unable to pay in one lump sum.

    Privacy Privacy, Cyber Risk & Data Security Minnesota Health Care Medical Debt Debt Collection

  • FTC approves amendment to Safeguards Rule requiring nonbanks to report data breaches

    Privacy, Cyber Risk & Data Security

    On October 27, the FTC approved an amendment to the Safeguards Rule to require nonbanks to report data breaches. Under the amended rule, financial institutions, including mortgage brokers, motor vehicle dealers, and payday lenders, will be required to notify the FTC of data breaches as soon as possible, and no later than 30 days after the discovery of an incident involving at least 500 consumers. Notice of an incident is required if unencrypted consumer information was acquired without consumers’ authorization; the FTC noted that encrypted consumer information is unlikely to cause consumer harm. The FTC will provide an online form to report certain information, including the type of information involved in the security event and the number of consumers affected or potentially affected. Additionally, the amended rule will require nonbanks to “develop, implement, and maintain a comprehensive security program to keep their customers’ information safe.” As previously covered by InfoBytes, the FTC recently extended the compliance deadline for some Safeguards provisions finalized in October 2021 (covered by InfoBytes here) to June of this year.

    The commission voted 3-0 to publish the amendment, which will become effective 180 days after its publication in the Federal Register.

    Privacy, Cyber Risk & Data Security Federal Issues Data Breach FTC Safeguards Rule Nonbank Supervision

  • President Biden issues Executive Order targeting AI safety

    Federal Issues

    On October 30, President Biden issued an Executive Order (EO) outlining how the federal government can promote artificial intelligence (AI) safety and security to protect U.S. citizens’ rights by: (i) directing AI developers to share critical information and test results with the U.S. government; (ii) developing standards for safe and secure AI systems; (iii) protecting citizens from AI-enabled fraud; (iv) establishing a cybersecurity program; and (v) creating a National Security Memorandum developed by the National Security Council to address AI security.

    President Biden also called on Congress to act by passing “bipartisan data privacy legislation” that (i) prioritizes federal support for privacy preservation; (ii) strengthens privacy technologies; (iii) evaluates agencies’ information collection processes for AI risks; and (iv) develops guidelines for federal agencies to evaluate privacy-preserving techniques. The EO additionally encourages agencies to use existing authorities to protect consumers and promote equity. As previously covered by InfoBytes, the FCC recently proposed to use AI to block unwanted robocalls and texts. The order further outlines how the U.S. can continue acting as a leader in AI innovation by catalyzing AI research, promoting a fair and competitive AI ecosystem, and expanding the highly skilled workforce by streamlining visa review.

    Federal Issues Privacy, Cyber Risk & Data Security White House Artificial Intelligence Biden Executive Order Consumer Protection
