InfoBytes Blog
Georgia bans CBDCs for government use
Recently, Georgia enacted HB 1053 (the “Act”), which will prohibit government agencies from engaging with central bank digital currencies (CBDCs). Specifically, the legislation will prevent state government agencies from accepting CBDCs as a form of payment or from participating in any pilot programs involving CBDCs. In banning CBDCs from government operations, Georgia representatives cited potential “privacy and security concerns” for individuals and businesses, called CBDCs an “unacceptable expansion” of federal authority, and expressed concern that a CBDC could disrupt the current banking system and “diminish” the roles of community banks and credit unions in the financial system. The ban will go into effect on July 1.
CFPB finalizes standards setting body component of open banking rule
On June 5, the CFPB announced that it had finalized part of its proposed Personal Financial Data Rights rule, establishing the minimum qualifications an industry standard-setting body must meet to be recognized by the Bureau once the full rule becomes final. Last October, the CFPB proposed the Personal Financial Data Rights rule to implement Section 1033 of the CFPA (covered by InfoBytes here), which was intended to give consumers more control over their financial data and stronger protections against its misuse.
After considering relevant public comments, the CFPB made several changes to the sections concerning standard setters and the standards they issue. Commenters asked for clarity regarding changes in standards, such as when a consensus standard ceases to have consensus status, and warned that such changes could cause market uncertainty. In response, the Bureau replaced the term “qualified industry standard” with “consensus standard” and added a newly defined term, “recognized standard setter.” The final rule defined “consensus standard” to clarify when a given standard qualifies as one, and provided that a “consensus standard” must be adopted and maintained by a recognized standard setter. As to concerns about market uncertainty, the CFPB responded that it expects revocation of a standard setter’s recognition to be a rare occurrence.
Regarding periodic review, the final rule extended the maximum duration of the CFPB’s recognition of a standard-setting body from the proposed three years to five years. The Bureau expects this change will incentivize standard-setting bodies to obtain recognition. The final rule also included “data recipients” as an interested party in response to commenter concerns that certain fintech sectors might be excluded. Additionally, meeting the criteria in the final rule is just the starting point for approval, as the CFPB may also assess whether the standard-setting body is committed to developing and upholding open banking standards.
The final rule also included a guide that detailed how standard setters can apply for CFPB recognition, how the Bureau will evaluate applications, and what standard setters can expect once recognized. The final rule will go into effect 30 days after publication in the Federal Register.
CPPA releases latest draft of automated decision-making technology regulation
The California Privacy Protection Agency (CPPA) released an updated draft of its proposed enforcement regulations for automated decisionmaking technology in connection with its March 8 board meeting. The draft regulations included new definitions, among them “automated decisionmaking technology,” defined as “any technology that processes personal information and uses computation to execute a decision, replace human decisionmaking, or substantially facilitate human decisionmaking,” which expands the scope of the previous September draft (covered by InfoBytes here).
Among other things, the draft regulations would require businesses that use automated decisionmaking technology to provide consumers with a “Pre-use Notice” informing them of (i) the business’s use of the technology; (ii) their right to opt out of the business’s use of the automated decisionmaking technology and how they can submit such a request (unless exempt); (iii) their right to access information; and (iv) how the automated decisionmaking technology works, including its intended content and recommendations and how the business plans to use the output. The draft regulations detailed further requirements for the opt-out process.
The draft regulations also included a new article, entitled “Risk Assessments,” which set out when a business must conduct a risk assessment, including requirements for businesses that process personal information to train automated decisionmaking technology or artificial intelligence. Under the proposed regulations, every business whose processing of consumers’ personal information may present a significant risk to consumers’ privacy must conduct a risk assessment before initiating that processing. If a business previously conducted a risk assessment for a processing activity in compliance with the article, submitted an abridged risk assessment to the CPPA, and has made no changes to the processing, it is not required to submit an updated risk assessment. The business must, however, submit a certification of compliance to the CPPA.
The CPPA has not yet started the formal rulemaking process for these regulations and the drafts are provided to facilitate board discussion and public participation, and are subject to change.
FCC partners with two U.K. regulators in combating privacy issues and protecting consumer data
Recently, the FCC announced (here and here) that it has partnered with two U.K. communications regulatory agencies to address privacy and data protection issues in telecommunications. The FCC issued two separate statements because the two U.K. regulators perform different duties: the first announcement is with the U.K. Information Commissioner’s Office (ICO), which regulates data protection and information rights; the second is with the U.K.’s Office of Communications (OFCOM), which regulates telecommunications. Both announcements highlighted a strengthening of resources and networks to protect consumers on an international scale, given the large amounts of data shared via international telecom carriers.
The FCC’s announcement with the ICO explained that the partnership will focus on combatting robocalls and robotexts, as well as on finding ways to better protect consumer privacy and data. In its announcement with OFCOM, the FCC described a new collaboration to combat illegal robocalls and robotexts, given the two countries’ shared interest in investigating network abuses. The FCC also elaborated on its desire to bolster requirements for gateway providers, the “on-ramp” for international call traffic into U.S. networks.
FCC’s Rosenworcel relaunches Consumer Advisory Committee; focuses on AI consumer issues
On February 20, FCC Chairwoman Jessica Rosenworcel announced that the FCC will relaunch the Consumer Advisory Committee (CAC). The CAC will focus on how emerging artificial intelligence (AI) technologies affect consumer privacy and protection, including how the FCC can better protect consumers against “unwanted and illegal” calls, among other things. The CAC has 28 members, comprising companies, non-profit entities, trade organizations, and individuals; a full list of members can be found here. The first meeting is on April 4 at 10:30 a.m. Eastern Time and will be open to the public via a live broadcast.
California appeals court vacates a ruling on enjoining enforcement of CPRA regulations
On February 9, California’s Third District Court of Appeal vacated a lower court’s decision to enjoin the California Privacy Protection Agency (CPPA) from enforcing regulations implementing the California Privacy Rights Act (CPRA). The decision reverses the trial court’s ruling delaying enforcement of the regulations until March 2024, which would have given businesses a one-year implementation period from the date final regulations were promulgated (covered by InfoBytes here).
The CPRA required the CPPA to finalize regulations on specific elements of the act by July 1, 2022, and provided that “the Agency’s enforcement authority would take effect on July 1, 2023,” a one-year gap between promulgation and enforcement. The CPPA did not issue final regulations until March 2023, but sought to enforce the rules starting on the July 1, 2023, statutory date. In response, in March 2023, the Chamber of Commerce filed a lawsuit in state court seeking a one-year delay of enforcement of the new regulations. The trial court held that a delay was warranted because “voters intended there to be a gap between the passing of final regulations and enforcement of those regulations.” On appeal, the court emphasized that there is no explicit and unambiguous language in the law prohibiting the agency from enforcing the CPRA until at least one year after final regulations are approved, and found that while the gap between the mandatory dates in the CPRA “amounts to a one-year delay,” such a delay was not mandated by the statutory language. The court further found no indication in the ballot materials available to voters that they intended such a one-year delay. The court explained that the one-year gap could have been intended to give businesses time to comply, to give the agency time to prepare for enforcing the new rules, or to serve some other purpose.
Accordingly, the appellate court held that the Chamber of Commerce “was simply not entitled to the relief granted by the trial court.” As a result of the court’s decision, businesses must now implement the privacy regulations established by the agency.
NIST group releases drafts on TLS 1.3 best practices aimed at the financial industry
On January 30, the NIST National Cybersecurity Center of Excellence (NCCoE) released a draft practice guide, titled “Addressing Visibility Challenges with TLS 1.3 within the Enterprise.” The protocol in question, Transport Layer Security (TLS) 1.3, is the most recent iteration of the security protocol most widely used to protect communications over the Internet, but migrating to it from TLS 1.2 (the prior version) remains challenging for major industries, including finance, that need to inspect incoming network traffic for evidence of malware or other malicious activity. A full description of the project can be found here.
Compared to TLS 1.2, TLS 1.3 is faster and more secure, but its implementation of forward secrecy, i.e., protecting past sessions against compromises of keys or passwords used in future sessions, creates challenges for data audit and legitimate inspection of network traffic. As a result, NIST released the practice guide to offer guidance on how to implement TLS 1.3 and meet audit requirements without compromising the TLS 1.3 protocol itself. The practice guide suggests how businesses can improve their technical methods, such as implementing a passive inspection architecture using either “rotated bounded-lifetime [Diffie-Hellman] keys on the destination TLS server” or exported session keys, to support ongoing compliance with financial industry and other regulations that require continuous monitoring for malware and cyberattacks. The draft practice guide is currently under public review, with Volumes A and B open for comment until April 1, 2024. Volume A is a second preliminary draft of an Executive Summary, and Volume B is a preliminary draft on the Approach, Architecture, and Security Characteristics.
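For readers who want a concrete sense of the exported-session-keys approach, the minimal sketch below (our illustration, not taken from the NCCoE guide) shows how a TLS 1.3 client in Python 3.8+ can log per-session secrets in the NSS key log format via ssl.SSLContext.keylog_filename; an authorized out-of-band audit tool holding that file can then decrypt captured traffic, while the connection remains forward-secret to outside observers. The host and key log path are hypothetical.

```python
import socket
import ssl

# Hypothetical sketch of the "exported session keys" visibility approach:
# Python 3.8+ (with OpenSSL 1.1.1+) can write per-session TLS secrets in
# the NSS key log format, which authorized passive-inspection tools can
# use to decrypt captured TLS 1.3 traffic for audit and malware monitoring.
KEYLOG_PATH = "/var/log/tls/session-keys.log"  # assumed path; restrict access

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3  # require TLS 1.3
context.keylog_filename = KEYLOG_PATH            # export session secrets

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        # Traffic on this connection stays forward-secret to outsiders;
        # only holders of the exported secrets can decrypt it later.
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        print(tls.version())  # e.g., "TLSv1.3"
```

Because the key log file grants decryption capability for the sessions it covers, such an architecture would hinge on tightly controlling who can read it, which is precisely the kind of governance question the practice guide addresses.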
EU Commission, Council, and Parliament agree on details of AI Act
On December 9, the EU Commission announced a political agreement between the European Parliament and the European Council regarding the proposed Artificial Intelligence Act (AI Act). The agreement is provisional and subject to finalization of the text and formal approval by lawmakers in the European Parliament and the Council. The AI Act will regulate the development and use of AI systems and impose fines for non-compliant use. The objective of the law is to ensure that AI technology is safe and that its use respects fundamental democratic rights, while balancing the need to allow businesses to grow and thrive. The AI Act will also create a new European AI Office to ensure coordination and transparency and to “supervise the implementation and enforcement of the new rules.” According to this EU Parliament press release, powerful foundation models that pose systemic risks will be subject to specific rules in the final version of the AI Act based on a tiered classification.
Except for foundation models, the EU AI Act adopts a risk-based approach to the regulation of AI systems, classifying them into different risk categories: minimal risk, high risk, and unacceptable risk. Most AI systems would be deemed minimal risk since they pose little to no risk to citizens’ safety. High-risk AI systems would be subject to the heaviest obligations, including certifications on the adoption of risk-mitigation systems, data governance, logging of activity, documentation obligations, transparency requirements, human oversight, and cybersecurity standards. Examples of high-risk AI systems include those used in utility infrastructure, medical devices, institutional admissions, law enforcement, biometric identification and categorization, and emotion recognition. AI systems deemed “unacceptable” are those that “present a clear threat to the fundamental rights of people,” such as systems that manipulate human behavior, like “deep fakes,” and any type of social scoring done by governments or companies. While some biometric identification is allowed, “unacceptable” uses include emotion recognition systems in the workplace or by law enforcement agencies (with narrow exceptions).
Sanctions for breach of the law will range from a low of €7.5 million or 1.5 percent of a company’s global total revenue to as high as €35 million or 7 percent of revenue. Once adopted, the law will be effective from early 2026 or later. Compliance will be challenging (the law targets AI systems made available in the EU), and companies should identify whether their use and/or development of such systems will be impacted.
NYDFS introduces guidelines for coin-listing and delisting policies in virtual currency entities
On November 15, NYDFS announced new regulatory guidance adopting heightened requirements for the coin-listing and delisting policies of DFS-regulated virtual currency entities, updating its 2020 framework. After considering public comments, the new guidance aims to enhance standards for the self-certification of coins and includes requirements for risk assessment, advance notification, and governance. It emphasizes stricter criteria for approving coins and mandates adherence to safety, soundness, and consumer protection principles. Virtual currency entities must comply with these guidelines, obtaining DFS approval of their coin-listing policies before self-certifying coins and submitting detailed records for ongoing compliance review. The guidance also outlines procedures for delisting coins and requires virtual currency entities to have an approved coin-delisting policy.
As an example under the coin-listing policy framework, the letter states that a virtual currency entity’s risk assessment must be tailored to its business activity and can include factors such as (i) technical design and technology risk; (ii) market and liquidity risk; (iii) operational risk; (iv) cybersecurity risk; (v) illicit finance risk; (vi) legal risk; (vii) reputational risk; (viii) regulatory risk; (ix) conflicts of interest; and (x) consumer protection. Regarding consumer protection, NYDFS says that virtual currency entities must “ensure that all customers are treated fairly and are afforded the full protection of all applicable laws and regulations, including protection from unfair, deceptive, or abusive practices.”
Similar to the listing policy framework, the letter provides a comparably detailed delisting policy framework. The letter also stated that all virtual currency entities must meet with DFS by December 8 to preview their draft coin-delisting policies and that final policies must be submitted to DFS for approval by January 31, 2024.
Minnesota amends health care provisions in extensive new law
On November 9, the State of Minnesota enacted Chapter 70 (S.F. No. 2995), a large bill amending certain sections of its current health care provisions. The bill makes extensive changes to health care provisions, covering prescription contraceptives, hearing aids, mental health, long COVID, and child care, among many other areas.
One of the significant new provisions requires a hospital to first check whether a patient’s bill is eligible for charity care before sending it to a third-party collection agency. Further, the bill places new requirements on hospitals collecting on medical debt before they can “garnish wages or bank accounts” of an individual. The Minnesota law also requires a hospital wishing to use a third-party collection agency to first complete an affidavit attesting that it has checked whether the patient is eligible for charity care, confirmed proper billing, given the patient the opportunity to apply for charity care, and, under certain circumstances where the patient is unable to pay in one lump sum, offered a reasonable payment plan instead.