On January 12, the U.S. District Court for the District of Columbia ordered a law firm to produce a forensic report generated by a consultant retained by the firm’s outside counsel in the wake of the plaintiff’s data breach, concluding that the report and associated materials were neither protected work product nor attorney-client privileged. According to the order, as part of a malpractice action in which the plaintiff, a Chinese entrepreneur, accused the law firm of failing to protect his personal information from hackers, the plaintiff moved to compel the production of “‘all reports of its forensic investigation into the cyberattack’ that led to the public dissemination of [plaintiff]’s confidential information.” The law firm opposed the motion, arguing that it already had turned over all relevant internally generated materials and that any other documents were protected by attorney-client and work-product privileges. The law firm argued that the forensic report was only one half of a two-track investigation of the incident. On one track, the law firm’s usual cybersecurity vendor worked to investigate the attack to preserve business continuity, while on a separate track, a different consultant was retained by counsel for the sole purpose of assisting the law firm in gathering information necessary to render legal advice.
The district court disagreed, concluding that the report is not covered by work-product privilege because the law firm failed to show that the report “‘would [not] have been created in the ordinary course of business irrespective of litigation.’” The court noted that the forensic report summarizes the findings of the investigation and that substantially the same document would have been prepared in any event as part of the ordinary course of the law firm’s business. While seeming to endorse the idea of a two-track investigation, the court noted that the law firm failed to provide any evidence that there actually were two tracks. Among other things, the court noted that the report summarizes findings into the data breach’s “cause, nature, and effect” and was used “for a range of non-litigation purposes,” including being shared with members of the law firm’s leadership and IT team and the FBI. In addition, the court noted that there was no evidence that the law firm’s usual cybersecurity vendor produced any findings, let alone a comprehensive report about the incident. Instead, the court stated that the record suggested that two days after the cyberattack began, the law firm turned to this second consulting firm instead of, rather than in addition to, the first consulting firm. Moreover, the court rejected the application of attorney-client privilege, concluding that the law firm’s “true objective was gleaning [the security-consulting firm]’s expertise in cybersecurity, not in ‘obtaining legal advice from [its] lawyer.’” The court noted that the report included remediation advice, indicating the security firm was “engaged for immediate ‘incident response.’” Lastly, the court noted the law firm can safely respond to the plaintiff’s interrogatories calling for information regarding other clients impacted by the cyberattack with “appropriate redactions in responsive documents” and “tailored” answers.
On January 6, New York Assembly Bill A 27 was prefiled in the 2021-22 state legislative session. The bill would establish the Biometric Privacy Act, setting out provisions regarding the retention, collection, disclosure, and destruction of biometric identifiers and biometric information. Highlights of the bill include:
- Private entities in possession of biometric identifiers or information will be required to develop a written public policy “establishing a retention schedule and guidelines for permanently destroying biometric identifiers and information when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within three years of the individual’s last interaction with the private entity, whichever occurs first.” Further, unless a private entity possesses a valid warrant or court subpoena, it must comply with its established retention schedule and destruction guidelines.
- Prior to obtaining a person’s biometric identifier or information, a private entity must inform the subject (or a subject’s legally authorized representative) in writing that the identifier or information is being collected or stored, the specific purpose and length of term for which it is being collected, stored, and used, and must receive a written release from the subject or legally authorized representative.
- Private entities may not sell, lease, trade, or otherwise profit from a person’s biometric identifier or information.
- Private entities may not disclose, redisclose, or otherwise disseminate such information unless (i) the subject provides consent; (ii) “the disclosure or redisclosure completes a financial transaction requested or authorized by the subject” or the subject’s legally authorized representative; or (iii) the information is required by a valid warrant or court subpoena.
- Private entities must take measures to store, transmit, and protect all biometric identifiers and information from disclosure “using the reasonable standard of care within the private entity’s industry” and “in a manner that is the same as or more protective than the manner in which the private entity stores, transmits, and protects other confidential and sensitive information.”
- The bill provides a private right of action for any person aggrieved by a violation of the bill’s provisions, including damages of $5,000 or actual damages (whichever is greater), reasonable attorneys’ fees and costs, and other relief, including injunctive relief, as deemed appropriate.
Notably, the New York Biometric Privacy Act is a close parallel to the Illinois Biometric Information Privacy Act, which was enacted in 2008.
On January 5, the Washington State Privacy Act, SB 5062, (referred to as “2021 WPA” or “bill”) was re-introduced for the 2021-22 state legislative session with some notable changes from the 2020 version. (InfoBytes coverage of the 2020 Washington Privacy Act, SB 6281, available here.) Highlights of the 2021 WPA include:
- Applicability. The bill will apply to legal entities that conduct business in Washington, or produce products or services targeted to Washington consumers, and that also (i) control or process personal data for at least 100,000 consumers; or (ii) derive more than 25 percent of gross revenue from the sale of personal data, in addition to processing or controlling the personal data of at least 25,000 consumers (the 2020 version included a 50 percent gross revenue threshold). State and local governments, municipal corporations, certain protected health information, personal data governed by state and federal regulations, and employment records continue to be exempt from coverage. Additionally, the bill adds nonprofit corporations, air carriers, and institutions of higher education to the exemption list.
- Consumer rights. Consumers will be able to exercise the following rights concerning their personal data: access; correction; deletion; access in a portable format; and opt-out rights, including the right to opt out of the processing of personal data for targeted advertising and the sale of personal data.
- Controller responsibilities. Controllers required to comply with the bill will be responsible for (i) transparency in a privacy notice; (ii) limiting the collection of data to what is required and relevant for a specified purpose; (iii) ensuring data is not processed for reasons incompatible with a specified purpose; (iv) securing personal data from unauthorized access; (v) prohibiting processing that violates state or federal laws prohibiting unlawful discrimination against consumers; (vi) obtaining consumer consent in order to process sensitive data; and (vii) ensuring contracts and agreements do not contain provisions that waive or limit a consumer’s rights. Controllers must also conduct data protection assessments for all processing activities that involve personal data. Notably, the 2021 WPA removes the requirement from the 2020 legislation that controllers conduct additional assessments each time a processing change occurs that materially increases the risk to consumers.
- State attorney general. The bill explicitly precludes a private right of action but permits the state attorney general to bring actions and impose penalties of no more than $7,500 per violation. The bill removes the 2020 requirement that the AG submit a report evaluating the liability and enforcement provisions by 2022, but requires the AG to work in concert with the state’s office of privacy and data protection on a technology review report to be submitted to the governor by December 2022.
- Right to cure. The bill includes a new 30-day right to cure any alleged violation after a warning letter is sent by the AG identifying the specific provisions believed to have been violated.
- Preemption. Similar to the 2020 WPA, the bill would preempt local laws, ordinances, and regulations, but includes an exception for any laws, ordinances, or regulations “regarding the processing of personal data by controllers or processors” that were adopted prior to July 1, 2020.
On January 12, the U.S. District Court for the Central District of California dismissed a data breach lawsuit brought against a hotel chain, ruling the plaintiff lacked standing. The plaintiff claimed class members were victims of a data breach when hotel employees at a franchise in Russia allegedly accessed personal information without authorization, including guests’ names, addresses, phone numbers, email addresses, genders, birth dates and loyalty account numbers. The plaintiff’s suit alleged, among other things, violations of the California Consumer Privacy Act and the state’s Unfair Competition Law. While the hotel disclosed the incident last March and admitted that class members’ personal information was compromised, the court determined that the plaintiff lacked standing to bring claims after the hotel’s investigation found that “no sensitive information, such as social security numbers, credit card information, or passwords, was compromised.” The court determined that the plaintiff failed to plausibly plead that any of the class members’ more sensitive data had fallen into the wrong hands, and that “[w]ithout a breach of this type of sensitive information, Plaintiff has not suffered an injury in fact and cannot meet the constitutional requirements of standing.”
On December 18, state attorneys general from Connecticut, Indiana, Kentucky, Michigan, New Jersey, New York and Oregon announced a $2 million settlement with an online retailer concerning allegations that the retailer failed to promptly and adequately respond to a 2019 data breach that compromised more than 22 million consumers’ personal information. According to the Assurance of Voluntary Compliance, the retailer failed to detect a data breach that allowed an unidentified attacker to obtain information including Social Security numbers and tax identification numbers. After learning about the vulnerability from a third-party security researcher, the retailer issued a patch to remediate the vulnerability and required users to reset passwords on their customer accounts. However, the AGs claim that the retailer took nearly six months to conduct a full investigation into whether its user database had been breached, and, after determining that users’ personal information was for sale on the dark web, later began notifying affected users of the breach.
In addition to paying $2 million to the AGs, which is partially suspended due to the retailer’s financial condition, the retailer—who has not admitted to the alleged violations—has agreed to (i) develop and implement a comprehensive information security program; (ii) design an incident response and data breach notification plan to encompass preparation, detection and analysis, containment, eradication, and recovery; (iii) ensure personal information safeguards and controls are in place, such as encryption, segmentation, penetration testing, risk assessment, password management, logging and monitoring, personal information deletion, and account closure notification; and (iv) ensure third-party security assessments occur biennially for the next five years.
On December 29, the U.S. District Court for the Northern District of California granted preliminary approval of a proposed settlement in a class action alleging a children’s clothing company and cloud technology service provider (collectively, “defendants”) violated, among other things, the California Consumer Privacy Act (CCPA) after suffering a data breach and potentially exposing customers’ personally identifiable information (PII) used to purchase products from the company’s website. After the company issued a notice of the security incident in January 2020, the plaintiffs filed the class action alleging the company failed to (i) “adequately protect its users’ PII”; (ii) “warn users of its inadequate information security practices”; and (iii) “effectively monitor [the company]’s website and ecommerce platform for security vulnerabilities and incidents.”
After mediation, the plaintiffs filed an unopposed motion for preliminary approval of class action settlement, which provides for a $400,000 settlement fund to cover approximately 200,000 class members who made purchases through the company’s website from September 16, 2019 to November 11, 2019. Class members have the option of claiming a cash payment of up to $500 for a Basic Award or of up to $5,000 for a Reimbursement Award, with amounts increasing or decreasing pro rata based on the number of claimants. Additionally, the company agreed to certain business practice changes, including conducting a risk assessment of its data assets and environment and enabling multi-factor authentication for all cloud services accounts. When granting preliminary approval, the court concluded that the agreement does “not improperly grant preferential treatment to any individual or segment of the Settlement Class and fall[s] within the range of possible approval as fair, reasonable, and adequate.”
On December 28, the Financial Crimes Enforcement Network (FinCEN) issued a notice to financial institutions concerning the potential for Covid-19 vaccine-related fraud, ransomware attacks, and other types of criminal activity. Specifically, FinCEN warns financial institutions to be aware of the potential sale of unapproved and illegally marketed vaccines, as well as fraudsters offering vaccines sooner than allowed for a fee. Financial institutions should also look out for ransomware targeting vaccine delivery operations and supply chains. The notice provides instructions for filing suspicious activity reports regarding the aforementioned activity.
On December 17, the U.S. Court of Appeals for the Ninth Circuit affirmed dismissal of a class action suit brought against an online payments firm and associated entities and individuals (collectively, “defendants”) for allegedly misleading investors (plaintiffs) about a 2017 data breach. As previously covered by InfoBytes, the district court concluded that, while the plaintiffs plausibly alleged the defendants’ November 2017 announcement about the data breach was misleading because it only disclosed a security vulnerability and did not disclose a breach that “potentially compromised” 1.6 million customers until a month later in December, plaintiffs failed to show that the defendants knew the breach had affected 1.6 million customers when they made the initial statement. Moreover, the court concluded the plaintiffs failed to allege that their cybersecurity expert was familiar with, or had knowledge of, the defendants’ specific security setup or that he actually talked to the defendants’ employees about the breach.
On appeal, the 9th Circuit agreed with the district court, noting that the complaint lacked any allegation that the defendants had a motive to mislead investors in November, but not in December, such as the selling of stock during the relevant period. Thus, the appellate court could not conclude that the plaintiffs showed that the November announcement “was intentionally misleading or so obviously misleading that he must have been aware of its potential to mislead.” Therefore, the appellate court affirmed dismissal for failure to state a claim.
On December 16, the FTC announced a settlement with a Nevada-based travel emergency services provider, resolving allegations that the company violated the FTC Act by failing to implement a comprehensive security program to ensure the security of personal consumer information, including sensitive health information. According to the complaint, the company collected personal information from customers who signed up for membership plans and allegedly stored the unencrypted personal information on an unsecured cloud database, which could be accessed by anyone on the internet. The company also allegedly failed to perform vulnerability and penetration testing or conduct periodic risk assessments, and failed to monitor for unauthorized access to its network. In addition, the FTC claims that the company, once it was informed that its data was unsecured, represented that it immediately conducted an investigation and determined “[t]here was no medical or payment-related information visible and no indication that the information has been misused.” However, the FTC alleges that the company failed to, among other things, “examine the actual information stored in the cloud database, identify the consumers placed at risk by the exposure, or look for evidence of other unauthorized access to the database.” Instead, after confirming that the data was online and publicly accessible, the company deleted the database, the FTC claims.
The proposed settlement requires the company to, among other things, maintain safeguards to protect personal information, implement a comprehensive data security program, and undergo biennial third-party assessments of the effectiveness of its program. The company is also prohibited from misrepresenting how it collects, maintains, secures, discloses, or deletes personal data, as well as whether it has been endorsed by or participates in any government- or third-party-sponsored privacy or security program. The company will also be required to send a notice to affected consumers about its response to the security incident.
On December 15, the Irish Data Protection Commission (Commission) announced a final decision was reached in a General Data Protection Regulation (GDPR) investigation into a U.S.-based social networking tech company’s actions related to a 2019 data breach that affected users across the European Union. The final decision, published by the European Data Protection Board (EDPB), imposes a €450,000 fine against the company, and resolves an investigation in which the Commission alleged the company violated Articles 33(1) and 33(5) of the GDPR by failing to provide notice about the breach within a 72-hour period and by neglecting to adequately document the breach. According to the Commission, this inquiry is the first “dispute resolution” Article 65 decision (draft decision) under the GDPR, and marks the first decision issued against a “big tech” company. According to the final decision, “a number of concerned supervisory authorities raised objections” to aspects of the draft decision, taking issue, among other things, with the size of the proposed fine, which was originally set between €135,000 and €275,000. The EDPB determined that the objections were “relevant and reasoned” and instructed the Commission to increase the fine to ensure “it fulfils its purpose as a corrective measure and meets the requirements of effectiveness, dissuasiveness and proportionality” established under the GDPR.
- Magda Gathani to discuss "Cryptocurrency meets banks" at the Women in Housing & Finance Partner Series
- Garylene D. Javier to moderate "Innovation in an evolving privacy landscape" at the American Bar Association Business Law Section Consumer Financial Services Committee Winter Meeting
- Buckley Webcast: What’s next for privacy and data security in 2021 and beyond?