InfoBytes Blog

Financial Services Law Insights and Observations

  • CPPA releases latest draft of automated decision-making technology regulation

    State Issues

    The California Privacy Protection Agency (CPPA) released an updated draft of its proposed enforcement regulations for automated decisionmaking technology in connection with its March 8 board meeting. The draft regulations include new definitions, among them “automated decisionmaking technology,” defined as “any technology that processes personal information and uses computation to execute a decision, replace human decisionmaking, or substantially facilitate human decisionmaking.” This definition expands the regulations’ scope beyond the previous September draft (covered by InfoBytes here).

    Among other things, the draft regulations would require businesses that use automated decisionmaking technology to provide consumers with a “Pre-use Notice” informing them of (i) the business’s use of the technology; (ii) their right to opt out of the business’s use of the automated decisionmaking technology, and how to submit such a request (unless an exemption applies); (iii) their right to access information about the technology; and (iv) how the automated decisionmaking technology works, including its intended content and recommendations and how the business plans to use the output. The draft regulations detail further requirements for the opt-out process.

    The draft regulations also include a new article, entitled “Risk Assessments,” which sets out when a business that processes personal information to train automated decisionmaking technology or artificial intelligence must conduct a risk assessment. Under the proposed regulations, every business whose processing of consumers’ personal information may present significant risk to consumers’ privacy must conduct a risk assessment before initiating that processing. If a business previously conducted a risk assessment for a processing activity in compliance with the article, submitted an abridged risk assessment to the CPPA, and has made no changes, the business is not required to submit an updated risk assessment; it must, however, submit a certification of compliance to the CPPA.

    The CPPA has not yet started the formal rulemaking process for these regulations; the drafts are provided to facilitate board discussion and public participation and are subject to change.

    State Issues Privacy Agency Rule-Making & Guidance California CPPA Artificial Intelligence

  • FCC partners with two U.K. regulators in combating privacy issues and protecting consumer data

    Privacy, Cyber Risk & Data Security

    Recently, the FCC announced (here and here) that it has partnered with two U.K. communications regulatory agencies to address privacy and data protection issues in telecommunications. The FCC issued two separate statements because the two U.K. regulators perform different duties: the first announcement is with the U.K. Information Commissioner’s Office (ICO), which regulates data protection and information rights; the second is with the U.K.’s Office of Communications (OFCOM), which regulates telecommunications. Both announcements highlighted a strengthening of resources and networks to protect consumers on an international scale, given the large amounts of data shared via international telecom carriers.

    The FCC’s announcement with the ICO explained that the partnership will focus on combating robocalls and robotexts, as well as on better protecting consumer privacy and data. In its announcement with OFCOM, the FCC described a new collaboration to combat illegal robocalls and robotexts, given the two countries’ shared interest in investigating network abuses. The FCC also elaborated on its desire to bolster requirements for gateway providers, the “on-ramp” for international call traffic into U.S. networks.

    Privacy, Cyber Risk & Data Security FCC UK Of Interest to Non-US Persons Privacy Data Protection

  • FCC’s Rosenworcel relaunches Consumer Advisory Committee; focuses on AI consumer issues

    Privacy, Cyber Risk & Data Security

    On February 20, FCC Chairwoman Jessica Rosenworcel announced that the FCC will relaunch the Consumer Advisory Committee (CAC). The CAC will focus on how emerging artificial intelligence (AI) technologies implicate consumer privacy and protection, including how the FCC can better protect consumers against “unwanted and illegal” calls, among other things. The CAC comprises 28 members drawn from companies, non-profit entities, trade organizations, and individuals; a full list of members can be found here. The first meeting is on April 4 at 10:30 a.m. Eastern Time and will be open to the public via a live broadcast.

    Privacy, Cyber Risk & Data Security FCC Advisory Committee Artificial Intelligence Privacy

  • California appeals court vacates a ruling on enjoining enforcement of CPRA regulations

    State Issues

    On February 9, California’s Third District Court of Appeal vacated a lower court’s decision enjoining the California Privacy Protection Agency (CPPA) from enforcing regulations implementing the California Privacy Rights Act (CPRA). The decision reverses the trial court’s ruling delaying enforcement of the regulations until March 2024, which would have given businesses a one-year implementation period from the date the final regulations were promulgated (covered by InfoBytes here).

    The CPRA mandated that the CPPA finalize regulations on specific elements of the act by July 1, 2022, and provided that “the Agency’s enforcement authority would take effect on July 1, 2023,” a one-year gap between promulgation and enforcement. The CPPA did not issue final regulations until March 2023 but sought to enforce the rules beginning on the July 1, 2023, statutory date. In response, in March 2023, the Chamber of Commerce filed a lawsuit in state court seeking a one-year delay of enforcement of the new regulations. The trial court held that a delay was warranted because “voters intended there to be a gap between the passing of final regulations and enforcement of those regulations.” On appeal, the court emphasized that no explicit and unambiguous language in the law prohibits the agency from enforcing the CPRA until at least one year after final regulations are approved, and found that while the timeline set by the CPRA’s mandatory dates “amounts to a one-year delay,” such a delay was not mandated by the statutory language. The court further found no indication in the ballot materials available to voters that they intended a one-year delay. The court explained that the one-year gap between the regulatory deadline and the enforcement date could have been intended to give businesses time to comply, or as a period for the agency to prepare to enforce the new rules, or for other reasons entirely.

    Accordingly, the appellate court held that the Chamber of Commerce “was simply not entitled to the relief granted by the trial court.” As a result of the court’s decision, businesses must now implement the privacy regulations established by the agency.

    State Issues Privacy Courts California Appellate CPPA CPRA

  • NIST group releases drafts on TLS 1.3 best practices aimed at the financial industry

    Privacy, Cyber Risk & Data Security

    On January 30, the NIST National Cybersecurity Center of Excellence (NCCoE) released a draft practice guide, titled “Addressing Visibility Challenges with TLS 1.3 within the Enterprise.” The protocol in question, Transport Layer Security (TLS) 1.3, is the most recent iteration of the security protocol most widely used to protect communications over the Internet, but migrating to it from TLS 1.2 (the prior version) remains challenging for major industries, including finance, that need to inspect incoming network traffic for evidence of malware or other malicious activity. A full description of the project can be found here.

    Compared to TLS 1.2, TLS 1.3 is faster and more secure, but its mandatory forward secrecy, i.e., protecting past sessions against compromise of keys or passwords used in future sessions, creates challenges for data audit and legitimate inspection of network traffic. NIST therefore released the practice guide to offer guidance on how to implement TLS 1.3 and still meet audit requirements without compromising the TLS 1.3 protocol itself. The practice guide suggests technical methods, such as a passive inspection architecture using either “rotated bounded-lifetime [Diffie-Hellman] keys on the destination TLS server” or exported session keys, to support ongoing compliance with financial industry and other regulations that require continuous monitoring for malware and cyberattacks. The draft practice guide is currently under public review, with Volumes A and B open for comment until April 1, 2024. Volume A is a second preliminary draft of the Executive Summary, and Volume B is a preliminary draft covering the Approach, Architecture, and Security Characteristics.
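
    To make the exported-session-keys option concrete, below is a minimal, hypothetical sketch in Go, using only the standard library’s crypto/tls package; it illustrates the general technique rather than the NIST guide’s specific architecture, and the host and key log file path are illustrative assumptions. The client hands each session’s secrets to a key log (NSS key log format), which an authorized passive inspection tool such as Wireshark can use to decrypt captured traffic for audit, while the TLS 1.3 protocol on the wire is left untouched.

    ```go
    package main

    import (
        "crypto/tls"
        "fmt"
        "os"
    )

    func main() {
        // Anyone holding this file can decrypt the recorded sessions, so in
        // a real deployment it must be tightly access-controlled and routed
        // only to the authorized monitoring system.
        keyLog, err := os.OpenFile("session-keys.log",
            os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0600)
        if err != nil {
            panic(err)
        }
        defer keyLog.Close()

        cfg := &tls.Config{
            MinVersion: tls.VersionTLS13, // require TLS 1.3
            // KeyLogWriter receives per-session secrets in NSS key log
            // format as they are derived during the handshake.
            KeyLogWriter: keyLog,
        }

        conn, err := tls.Dial("tcp", "example.com:443", cfg)
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        fmt.Printf("negotiated TLS version: %#x\n", conn.ConnectionState().Version)
    }
    ```

    Because the decryption capability is granted out of band, forward secrecy is preserved against every party that never receives the key log, which is the property the practice guide aims to keep intact.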

    Privacy, Cyber Risk & Data Security Data Internet Privacy NIST

  • EU Commission, Council, and Parliament agree on details of AI Act

    Privacy, Cyber Risk & Data Security

    On December 9, the EU Commission announced a political agreement between the European Parliament and the European Council on the proposed Artificial Intelligence Act (AI Act). The agreement is provisional and subject to finalization of the text and formal approval by lawmakers in the European Parliament and the Council. The AI Act will regulate the development and use of AI systems and impose fines for non-compliant use. The objective of the law is to ensure that AI technology is safe and that its use respects fundamental democratic rights, while balancing the need to allow businesses to grow and thrive. The AI Act will also create a new European AI Office to ensure coordination and transparency and to “supervise the implementation and enforcement of the new rules.” According to this EU Parliament press release, powerful foundation models that pose systemic risks will be subject to specific rules in the final version of the AI Act based on a tiered classification.

    Except for foundation models, the EU AI Act adopts a risk-based approach to the regulation of AI systems, classifying them into different risk categories: minimal risk, high risk, and unacceptable risk. Most AI systems would be deemed minimal risk since they pose little to no risk to citizens’ safety. High-risk AI systems would be subject to the heaviest obligations, including certifications on the adoption of risk-mitigation systems, data governance, activity logging, documentation obligations, transparency requirements, human oversight, and cybersecurity standards. Examples of high-risk AI systems include those used in utility infrastructure, medical devices, institutional admissions, law enforcement, biometric identification and categorization, and emotion recognition. AI systems deemed “unacceptable” are those that “present a clear threat to the fundamental rights of people,” such as systems that manipulate human behavior, like “deep fakes,” and any type of social scoring by governments or companies. While some biometric identification is allowed, “unacceptable” uses include emotion recognition systems in the workplace or by law enforcement agencies (with narrow exceptions).

    Sanctions for breach of the law will range from a low of €7.5 million or 1.5 percent of a company’s global total revenue to as high as €35 million or 7 percent of revenue. Once adopted, the law will be effective from early 2026 or later. Compliance will be challenging (the law targets AI systems made available in the EU), and companies should identify whether their use and/or development of such systems will be impacted.

    Privacy, Cyber Risk & Data Security Privacy European Union Artificial Intelligence Privacy/Cyber Risk & Data Security Of Interest to Non-US Persons

  • NYDFS introduces guidelines for coin-listing and delisting policies in virtual currency entities

    State Issues

    On November 15, NYDFS announced new regulatory guidance adopting new requirements for the coin-listing and delisting policies of DFS-regulated virtual currency entities, updating its 2020 framework for each policy. Issued after consideration of public comments, the new guidance aims to enhance standards for the self-certification of coins and includes requirements for risk assessment, advance notification, and governance. It sets stricter criteria for approving coins and mandates adherence to safety, soundness, and consumer protection principles. Virtual currency entities must obtain DFS approval of their coin-listing policies before self-certifying coins and must submit detailed records for ongoing compliance review. The guidance also outlines procedures for delisting coins and requires virtual currency entities to have an approved coin-delisting policy.

    As an example, under the coin-listing policy framework, the letter states that a risk assessment must be tailored to the virtual currency entity’s business activity and can include factors such as (i) technical design and technology risk; (ii) market and liquidity risk; (iii) operational risk; (iv) cybersecurity risk; (v) illicit finance risk; (vi) legal risk; (vii) reputational risk; (viii) regulatory risk; (ix) conflicts of interest; and (x) consumer protection. Regarding consumer protection, NYDFS says that virtual currency entities must “ensure that all customers are treated fairly and are afforded the full protection of all applicable laws and regulations, including protection from unfair, deceptive, or abusive practices.”

    Similar to the listing policy framework, the letter provides a detailed delisting policy framework. It also states that all virtual currency entities must meet with DFS by December 8 to preview their draft coin-delisting policies and that final policies must be submitted to DFS for approval by January 31, 2024.

    State Issues Privacy Agency Rule-Making & Guidance Fintech Cryptocurrency Digital Assets NYDFS New York Consumer Protection

  • Minnesota amends health care provision in extensive new law

    Privacy, Cyber Risk & Data Security

    On November 9, the State of Minnesota enacted Chapter 70 (S.F. No. 2995), a large bill amending sections of its existing health care provisions. The bill makes extensive changes to health care law, covering prescription contraceptives, hearing aids, mental health, long COVID, and childcare, among many other topics.

    One significant new provision requires a hospital to check whether a patient’s bill is eligible for charity care before sending it to a third-party collection agency. The bill also places new requirements on hospitals collecting a medical debt before they can “garnish wages or bank accounts” of an individual. In addition, a hospital wishing to use a third-party collection agency must first complete an affidavit attesting that it has checked whether the patient is eligible for charity care, confirmed proper billing, given the patient the opportunity to apply for charity care, and, where the patient is unable to pay in one lump sum, offered a reasonable payment plan.

    Privacy Privacy, Cyber Risk & Data Security Minnesota Health Care Medical Debt Debt Collection

  • CPPA continues efforts towards California Privacy Rights Act

    State Issues

    The California Privacy Protection Agency board is continuing its efforts to prepare regulations implementing the California Privacy Rights Act (covered by InfoBytes here and here).

    Draft risk assessment regulations and cybersecurity audit regulations were released in advance of the board’s September 8 open meeting; draft regulations on automated decision-making have yet to be published. Unlike the regulations finalized in March, which were presented in a more complete state, these drafts are expected to draw more extensive comment and feedback. As previously covered by InfoBytes, the California Privacy Protection Agency cannot enforce any regulations until a year after their finalization, which puts a clock on the finalization process for these drafts.

    The draft cybersecurity regulations include thoroughness requirements for the annual cybersecurity audit, which must be completed “using a qualified, objective, independent professional” and “procedures and standards generally accepted in the profession of auditing.” Management must also sign a certification that the business has not influenced the audit and that it has reviewed the audit and understands its findings.

    The draft risk assessment regulations require a business to conduct a risk assessment before initiating processing of consumers’ personal information that “presents significant risk to consumers’ privacy,” as set forth in an enumerated list that includes selling or sharing personal information, processing the personal information of consumers under age 16, and using certain automated decision-making technology, including AI.

    State Issues Privacy California CCPA CPPA CPRA Compliance State Regulators Opt-Out Consumer Protection

  • CFPB looking at privacy implications of worker surveillance

    Agency Rule-Making & Guidance

    On June 20, the CFPB released a statement announcing it will be “embarking on an inquiry into the data broker industry and issues raised by new technological developments.” The Bureau requested information in March about entities that purchase information from data brokers, the negative impacts of data broker practices, and the issues consumers face when they wish to see or correct their personal information. (Covered by InfoBytes here.) The findings from this inquiry will help the Bureau understand how employees’ personal information can find its way into the data broker market.

    With similar intentions, the White House Office of Science and Technology Policy (OSTP) released a request for information (RFI) to learn more about the automated tools employers use to monitor, screen, surveil, and manage their employees. The OSTP blog post cited to an increase in the use of technologies that handle employees’ sensitive information and data. The OSTP also highlighted the Biden administration’s Blueprint for an AI Bill of Rights (covered by InfoBytes here), which underscored the importance of building in protections when developing new technologies and understanding associated risks. Responses to the RFI will be used to “inform new policy responses, share relevant research, data, and findings with the public, and amplify best practices among employers, worker organizations, technology vendors, developers, and others in civil society,” the OSTP said.

    The CFPB’s response to the RFI described the agency’s concerns about risks to employees’ privacy, noting that it has long received complaints from the public about the lack of transparency and the inaccuracies in the employment screening industry. The response specifically mentions FCRA protections for consumers and guidelines around the sale of personal data. The Bureau also commented that employees may not be at liberty to determine how their information is used or sold, and may have no recourse when inaccurately reported information affects their earnings, access to credit, or ability to rent a home or buy a car, among other things.

    Agency Rule-Making & Guidance Federal Issues Privacy, Cyber Risk & Data Security CFPB Consumer Finance Consumer Protection Privacy Data Brokers Biden FCRA
