On November 15, NYDFS announced regulatory guidance adopting new requirements for the coin-listing and coin-delisting policies of DFS-regulated virtual currency entities, updating its 2020 framework for both. After considering public comments, the guidance aims to enhance standards for self-certification of coins and includes requirements for risk assessment, advance notification, and governance. It sets stricter criteria for approving coins and mandates adherence to safety, soundness, and consumer protection principles. Virtual currency entities must comply with these guidelines by obtaining DFS approval of their coin-listing policies before self-certifying coins and by submitting detailed records for ongoing compliance review. The guidance also outlines procedures for delisting coins and requires virtual currency entities to have an approved coin-delisting policy.
As an example under the coin-listing policy framework, the letter states that a virtual currency entity's risk assessment must be tailored to its business activity and can include factors such as (i) technical design and technology risk; (ii) market and liquidity risk; (iii) operational risk; (iv) cybersecurity risk; (v) illicit finance risk; (vi) legal risk; (vii) reputational risk; (viii) regulatory risk; (ix) conflicts of interest; and (x) consumer protection. Regarding consumer protection, NYDFS says that virtual currency entities must “ensure that all customers are treated fairly and are afforded the full protection of all applicable laws and regulations, including protection from unfair, deceptive, or abusive practices.”
Similar to the listing policy framework, the letter provides a comprehensive delisting policy framework. The letter also states that all virtual currency entities must meet with DFS by December 8 to preview their draft coin-delisting policies, and that final policies must be submitted to DFS for approval by January 31, 2024.
On November 9, the State of Minnesota enacted Chapter 70--S.F.No. 2995, a sweeping bill amending certain sections of its current health care provisions. The changes are extensive, covering prescription contraceptives, hearing aids, mental health, long COVID, and childcare, among many other areas.
One of the significant new laws requires a hospital to first check whether a patient is eligible for charity care before sending the patient’s bill to a third-party collection agency. Further, the bill places new requirements on a hospital collecting on a medical debt before it can “garnish wages or bank accounts” of an individual. The Minnesota law also provides that a hospital wishing to use a third-party collection agency must first complete an affidavit attesting that it has checked whether the patient is eligible for charity care, confirmed proper billing, given the patient the opportunity to apply for charity care, and, under certain circumstances, if the patient is unable to pay in one lump sum, offered a reasonable payment plan instead.
Draft risk assessment regulations and cybersecurity audit regulations were released in advance of the September 8 open meeting held by the board; draft regulations on automated decision-making have yet to be published. Unlike the regulations finalized in March, which were presented in a more polished state, these drafts are expected to draw more extensive comment and feedback. As previously covered by InfoBytes, the California Privacy Protection Agency cannot enforce any regulations until a year after their finalization, adding a ticking clock to the finalization process for these draft regulations.
The draft cybersecurity regulations include thoroughness requirements for the annual cybersecurity audit, which must be completed “using a qualified, objective, independent professional” and “procedures and standards generally accepted in the profession of auditing.” Management must also sign a certification attesting that the business has not influenced the audit, has reviewed the audit, and understands its findings.
The draft risk assessment regulations require a business to conduct a risk assessment prior to initiating processing of consumers’ personal information that “presents significant risk to consumers’ privacy,” as set forth in an enumerated list that includes the selling or sharing of personal information; processing the personal information of consumers under age 16; and using certain automated decision-making technology, including AI.
On June 20, the CFPB released a statement announcing it will be “embarking on an inquiry into the data broker industry and issues raised by new technological developments.” The Bureau requested information in March about entities that purchase information from data brokers, the negative impacts of data broker practices, and the issues consumers face when they wish to see or correct their personal information. (Covered by InfoBytes here.) The findings from this inquiry will help the Bureau understand how employees’ personal information can find its way into the data broker market.
With similar intentions, the White House Office of Science and Technology Policy (OSTP) released a request for information (RFI) to learn more about the automated tools employers use to monitor, screen, surveil, and manage their employees. The OSTP blog post cited an increase in the use of technologies that handle employees’ sensitive information and data. The OSTP also highlighted the Biden administration’s Blueprint for an AI Bill of Rights (covered by InfoBytes here), which underscored the importance of building in protections when developing new technologies and understanding associated risks. Responses to the RFI will be used to “inform new policy responses, share relevant research, data, and findings with the public, and amplify best practices among employers, worker organizations, technology vendors, developers, and others in civil society,” the OSTP said.
The CFPB’s response to the RFI described the agency’s concerns regarding risks to employees’ privacy, noting that it has long received complaints from the public about the lack of transparency and inaccuracies in the employment screening industry. The response specifically mentions FCRA protections for consumers and guidelines around the sale of personal data. The Bureau also commented that employees may not be at liberty to determine how their information is used or sold, and may have no opportunity for recourse when inaccurately reported information affects their earnings, access to credit, ability to rent a home or buy a car, and more.
On June 14, the OCC released its Semiannual Risk Perspective for Spring 2023, which reports on key risks threatening the safety and soundness of national banks, federal savings associations, and federal branches and agencies. The agency reported that the federal banking system remains sound overall but warned banks to remain diligent and maintain effective risk management practices over critical functions in order to withstand current and future economic and financial challenges.
The OCC highlighted liquidity, operational, credit, and compliance risk as key risk themes in the report. Observations include: (i) in response to recent bank failures and investment portfolio depreciation, liquidity levels have been strengthened; (ii) credit risk remains moderate, although signs of stress are increasing in certain commercial real estate segments, and high inflation and rising interest rates are also causing credit conditions to deteriorate; (iii) operational risk, including persistent cyber threats, is elevated, while banks’ increased use of third parties and the digitalization of banking products and services create both opportunities and risks; and (iv) compliance risk remains heightened as banks continue to navigate a dynamic environment in which compliance management systems try to keep pace with evolving products, services, and delivery channel offerings.
The report also discussed challenges banks face when trying to manage climate-related financial risks, as well as the importance of investing and aligning technology with banks’ business goals. Acting Comptroller of the Currency Michael Hsu urged banks “to ‘be on the balls of their feet’ with regards to risk management” and “guard against complacency.”
The U.S. Court of Appeals for the Eleventh Circuit recently reversed the dismissal of a negligence claim brought against a Georgia-based airport retailer, determining that a company of its size and sophistication “could have foreseen being the target of a cyberattack.” Plaintiff, a former employee of the defendant, filed suit alleging the defendant failed to protect thousands of current and former employees’ sensitive personally identifiable information (PII), including Social Security numbers, from an October 2020 ransomware attack. Bringing claims for negligence and breach of implied contract on behalf of class members, plaintiff contended that not only should the defendant have protected the PII, but it also took several months for the defendant to notify affected individuals. A notice provided by the company claimed the attack only affected an internal, administrative system, but according to the plaintiff, the attacker uploaded the PII to third-party servers. Plaintiff was later informed that an unknown party used his Social Security number to file pandemic-related unemployment assistance claims under his name in Rhode Island and Kentucky. Plaintiff argued that the defendant should have taken steps before the hack to better protect the information and that the alleged “harms he suffered were a foreseeable result of [defendant’s] inadequate security practices and its failure to comply with industry standards appropriate to the nature of the sensitive, unencrypted information it was maintaining.” The district court disagreed and granted defendant’s motion to dismiss for failure to state a claim. Plaintiff appealed, arguing that “the district court demanded too much at the pleadings stage.”
On appeal, the 11th Circuit concluded, among other things, that the plaintiff could not have been expected to plead details about the defendant’s private data security policies. “We cannot expect a plaintiff in [this] position to plead with exacting detail every aspect of [defendant’s] security history and procedures that might make a data breach foreseeable, particularly where ‘the question of reasonable foreseeability of a criminal attack is generally for a jury’s determination rather than summary adjudication by the courts,’” the appellate court wrote, noting that plaintiff had sufficiently pled the existence of a special relationship as well as a foreseeable risk of harm. However, the 11th Circuit affirmed dismissal of plaintiff’s claim for breach of implied contract, stating that he failed to allege any facts showing that the defendant agreed to be bound by a data retention or protection policy.
A few days later, the 11th Circuit issued an opinion saying class members in a different action should be allowed to amend their data breach negligence claim in light of the appellate court’s decision discussed above. The 11th Circuit wrote that the decision in the aforementioned case “undermined” the dismissal of plaintiff’s negligence claim alleging a defendant warehousing company allowed a data breach to occur because it failed to take appropriate measures to secure its network. Class members in this case also alleged their PII was improperly accessed during a ransomware attack. The appellate court agreed with class members’ contention that the defendant had failed to address a newly created legal standard for data breach negligence claims in its motion to dismiss: “Indeed, the plaintiffs would have been hard-pressed to predict that they might need to amend their complaint to add more specific foreseeability allegations in response to [defendant’s] renewed motion to dismiss,” the appellate court wrote, reversing the denial of the motion for leave to amend.
On December 12, the U.S. District Court for the Northern District of California granted a defendant’s motion for summary judgment in a suit alleging that it collected consumers’ data without first obtaining their consent. According to the opinion, the plaintiffs are users of the defendant’s browser who alleged that they chose not to sync their browsers with the defendant’s accounts while browsing the web from July 2016 to the present. The complaint further noted that the browser’s sync feature permits “users to store their personal information by logging into the browser with their [defendant’s] account.” The district court granted the defendant’s motion for summary judgment after determining that most of the issues are “browser agnostic” rather than specific to the browser. Furthermore, the district court determined that because those issues are not specific to the browser, the defendant’s general privacy policy “governs the collection of those categories of information identified by plaintiffs.” The district court also found that “a reasonable person viewing those disclosures would understand that [the defendant] maintains the practices of collecting its users' data when users use [the defendant’s] services or third-party sites that use [the defendant’s] services and that [the defendant] uses the data for advertising purposes.” The district court also noted that “a reasonable user reviewing these same disclosures would understand that [the defendant] combines and links this information across sites and services for targeted advertising purposes.”
On February 17, the U.S. District Court for the District of Delaware granted a motion to dismiss a putative class action suit for lack of Article III standing, in which plaintiffs alleged that the defendant violated their privacy rights by intercepting and recording mouse clicks and other website visit information. According to the memorandum opinion, the plaintiffs alleged defendant’s recording of that information violated, among other things, the California Invasion of Privacy Act (CIPA) and the Federal Wiretap Act. In finding that the plaintiffs failed to plead a concrete injury, the district court found that, while the “[p]laintiffs have a legally cognizable interest in controlling their personal information and that intrusion upon that interest would amount to a concrete injury[,]” they failed to identify how any of their personal information was implicated in the complaint. The court explained: “[p]laintiffs fail to explain how either [the defendants] possession of anonymized, non-personal data regarding their browsing activities on [the defendant’s] website harms their privacy interests in any way.” The district court also noted that the plaintiffs did not make any allegations to suggest a risk of imminent or substantial future harm.
On December 15, the acting New Jersey attorney general and the Division of Consumer Affairs reached a settlement with three New Jersey-based medical providers for allegedly violating the New Jersey Consumer Fraud Act and the federal Health Insurance Portability and Accountability Act (HIPAA) by failing to adequately safeguard patient data. The settlement resolved allegations that patients’ personal and protected health information, including health records, driver’s license numbers, Social Security numbers, financial account numbers, and payment card numbers, were exposed when several employee email accounts were compromised in a 2019 data breach. The AG additionally contended that while notifying clients of the initial data breach, the defendants “improperly disclosed patient data when a third-party vendor improperly mailed notification letters intended for 13,047 living patients by addressing the letters to those patients’ prospective next-of-kin.” Federal and state law require medical providers to implement appropriate safeguards to protect consumers’ sensitive health and personal information and identify potential threats—measures, the AG alleged, the defendants failed to take. Without admitting to any violation of law, the defendants agreed to the terms of the consent order and will pay $353,820 in penalties and $71,180 in attorneys’ fees and investigative costs. The defendants will also adopt additional comprehensive privacy and security measures to safeguard consumers’ protected information and will obtain a third-party assessment of their policies and practices related “to the collection, storage, maintenance, transmission, and disposal of patient data.”