On January 28, the U.S. District Court for the Northern District of Illinois denied a motion to reconsider, as well as a motion to certify questions for appeal and stay proceedings pending appeal, in a matter concerning class claims that an auto leasing company and its parent company (collectively, “defendants”) violated the Illinois Biometric Information Privacy Act (BIPA) by unlawfully collecting biometric fingerprint data without first receiving informed consent. The court previously denied the defendants’ motion to dismiss after concluding the plaintiff stated a BIPA claim against both defendants. The auto leasing company argued, among other things, that the parent company should not be held liable because it was never the plaintiff’s employer, did not control her work environment, and had nothing to do with the fingerprint timekeeping system. The court disagreed, finding that under BIPA, the plaintiff’s allegations about the parent company were not “legal conclusions,” and that “control over employee timekeeping and privacy describes a relevant factual aspect of her personal experience working for defendants.” According to the court, “[t]his factual allegation raises the reasonable inference that [the parent company] administered the alleged fingerprint-scanning system, and in turn, plausibly suggests that [the parent company] collected, retained, and disseminated her fingerprints.” The parent company will have the opportunity to address alternative theories of liability while seeking summary judgment against the plaintiff or at trial, the court wrote.
On January 12, the U.S. District Court for the Central District of California dismissed a data breach lawsuit brought against a hotel chain, ruling the plaintiff lacked standing. The plaintiff claimed class members were victims of a data breach when hotel employees at a franchise in Russia allegedly accessed personal information without authorization, including guests’ names, addresses, phone numbers, email addresses, genders, birth dates and loyalty account numbers. The plaintiff’s suit alleged, among other things, violations of the California Consumer Privacy Act and the state’s Unfair Competition Law. While the hotel disclosed the incident last March and admitted that class members’ personal information was compromised, the court determined that the plaintiff lacked standing to bring claims after the hotel’s investigation found that “no sensitive information, such as social security numbers, credit card information, or passwords, was compromised.” The court determined that the plaintiff failed to plausibly plead that any of the class members’ more sensitive data had fallen into the wrong hands, and that “[w]ithout a breach of this type of sensitive information, Plaintiff has not suffered an injury in fact and cannot meet the constitutional requirements of standing.”
On December 15, the Irish Data Protection Commission (Commission) announced a final decision was reached in a General Data Protection Regulation (GDPR) investigation into a U.S.-based social networking tech company’s actions related to a 2019 data breach that affected users across the European Union. The final decision, published by the European Data Protection Board (EDPB), imposes a €450,000 fine against the company, and resolves an investigation in which the Commission alleged the company violated Articles 33(1) and 33(5) of the GDPR by failing to provide notice about the breach within a 72-hour period and by neglecting to adequately document the breach. According to the Commission, this inquiry is the first “dispute resolution” Article 65 decision (draft decision) under the GDPR, and marks the first decision issued against a “big tech” company. According to the final decision, “a number of concerned supervisory authorities raised objections” to aspects of the draft decision, taking issue, among other things, with the size of the proposed fine, which was originally set between €135,000 and €275,000. The EDPB determined that the objections were “relevant and reasoned” and instructed the Commission to increase the fine to ensure “it fulfils its purpose as a corrective measure and meets the requirements of effectiveness, dissuasiveness and proportionality” established under the GDPR.
On November 17, NYDFS announced a partnership with a non-profit company to provide a free cybersecurity toolkit to small businesses, including those in the financial services sector. The toolkit is intended to help small businesses strengthen their cybersecurity and to protect themselves and their customers from growing cyber threats. Operational tools and educational resources covered in the toolkit address “identifying hardware and software, updating defenses against cyber threats, strengthening passwords and multi-factor authentication, backing up and recovering data, and protecting email systems.” NYDFS’ partnership with the company also includes the development of a set of sample policies based on cybersecurity best practices to help small businesses install necessary governance and procedures. The sample policies include, among other things, a risk assessment and a sample third-party service provider policy. NYDFS advises small businesses to “review the tools and sample policies and to adapt them to their specific business risks and operations, including to comply with any applicable state and federal laws.”
On November 3, California voters approved a ballot initiative, the California Privacy Rights Act of 2020 (CPRA), that expands on the California Consumer Privacy Act (CCPA). While there are a number of differences between the CPRA and the CCPA, some key provisions include:
- Adding expanded consumer rights, including the right to correction and the right to limit sharing of personal information for cross-context behavioral advertising, whether or not for monetary or other valuable consideration.
- Changing the definitions of various entities, including raising the numerical threshold for qualifying as a business from 50,000 to 100,000 consumers and households, and removing devices from this threshold.
- Adding the category of sensitive personal information that is subject to specific rights.
- Creating a new privacy agency, the California Privacy Protection Agency, to administer, implement, and enforce the CPRA.
It is important to note that the Gramm-Leach-Bliley Act and Fair Credit Reporting Act exemptions are retained in the CPRA, and the act extends the employee and business-to-business exemptions to January 1, 2023.
The CPRA becomes effective January 1, 2023, with enforcement delayed until July 1, 2023. However, the CPRA contains a look-back provision (i.e., the CPRA will apply to personal information collected by a business on or after January 1, 2022). The new privacy agency also is required to begin drafting regulations starting on July 1, 2021, with final regulations to be completed one year later.
Please refer to a Buckley article for further information on the differences between the CCPA and the CPRA: 6 Key Ways the California Privacy Rights Act of 2020 Would Revise the CCPA (Corporate Compliance Insights), as well as continuing InfoBytes coverage here.
On September 30, a multistate settlement was reached between a health insurance company and a coalition of 42 state attorneys general and the District of Columbia to resolve a 2014 data breach that allegedly compromised the personal information of more than 78 million customers nationwide. According to the states, cyber attackers infiltrated the company’s systems using malware installed through a phishing email. The data breach resulted in the exposure of consumers’ Social Security numbers, birth dates, and other personal data. Under the terms of the settlement, the health insurer must pay $39.5 million in penalties and fees, and is required to (i) not misrepresent the extent of its privacy and security protections; (ii) implement a comprehensive information security program, including “regular security reporting to the Board of Directors and prompt notice of significant security events to the CEO”; (iii) implement specific security requirements, including “anti-virus maintenance, access controls and two-factor authentication, encryption, risk assessments, penetration testing, and employee training”; and (iv) schedule third-party assessments and audits for three years.
Separately, the California AG reached an $8.69 million settlement, subject to court approval, in a parallel investigation, which requires the health insurer to, among other things, implement changes to its information security program and fix vulnerabilities to prevent future data breaches.
Previously in 2018, the health insurer reached a $115 million class action settlement, which provided for two years of credit monitoring, reimbursement of out-of-pocket costs related to the breach, and alternative cash payment for credit monitoring services already obtained (covered by InfoBytes here).
On September 17, the California attorney general announced a settlement with a technology company that operates a fertility-tracking mobile app to resolve claims that security flaws put users’ sensitive personal and medical information at risk in violation of state consumer protection and privacy laws. According to the complaint filed in the Superior Court for the County of San Francisco, the company’s app allegedly failed to adequately safeguard and preserve the confidentiality of medical information by, among other things, (i) allowing access to user information without the user’s consent, by failing to “authenticate the legitimacy of the user to whom the medical information was shared”; (ii) allowing a password-change vulnerability to permit unauthorized access and disclosure of information stored in the app without the user’s consent; (iii) making misleading statements concerning implemented security measures and the app’s ability to protect consumers’ sensitive personal and medical information from unauthorized disclosure; and (iv) failing to implement and maintain reasonable security procedures and practices.
Under the terms of the settlement, the company—which does not admit liability—is required to pay a $250,000 civil penalty and incorporate privacy and security design principles into its mobile apps. The company must also obtain affirmative authorization from users before sharing or disclosing sensitive personal and medical information, and must allow users to revoke previously granted consent. Additionally, the company is required to provide ongoing annual employee training concerning the proper handling and protection of sensitive personal and medical information, in addition to training on cyberstalking awareness and prevention. According to the AG’s press release, the settlement also includes “a first-ever injunctive term that requires [the company] to consider how privacy or security lapses may uniquely impact women.”
On September 15, the New York attorney general announced a settlement with a national franchisor of a coffee retail chain to resolve allegations that the company violated New York’s data breach notification statute and several state consumer protection laws by failing to protect thousands of customer accounts from a series of cyberattacks. As previously covered by InfoBytes, the AG claimed that, beginning in 2015, customer accounts containing stored value cards that could be used to make purchases in stores and online were subject to repeated cyberattack attempts, resulting in more than 20,000 compromised accounts and “tens of thousands” of dollars stolen. Following the attacks, the AG alleged that the company failed to take steps to protect the affected customers or to conduct an investigation to determine the extent of the attacks or implement appropriate safeguards to limit future attacks. The settlement, subject to court approval, would require the company to (i) notify affected customers, reset their passwords, and refund any stored value cards used without permission; (ii) pay $650,000 in penalties and costs; (iii) maintain safeguards to protect against similar attacks in the future; and (iv) develop and follow appropriate incident response procedures.
On August 19, the U.S. District Court for the Northern District of California granted preliminary approval of a $650 million biometric privacy settlement between a global social media company and a class of Illinois users. If granted final approval, the settlement would resolve consolidated class action claims that the social media company violated the Illinois Biometric Information Privacy Act (BIPA) by allegedly developing a face template that used facial-recognition technology without users’ consent. A lesser $550 million settlement deal filed in May (covered by InfoBytes here) was rejected by the court due to “concerns about an unduly steep discount on statutory damages under the BIPA, a conduct remedy that did not appear to require any meaningful changes by [the social media company], over-broad releases by the class, and the sufficiency of notice to class members.” The preliminarily approved settlement would also require the social media company to provide nonmonetary injunctive relief by setting all default face recognition user settings to “off” and by deleting all existing and stored face templates for class members unless class members provide their express consent after receiving a separate disclosure on how the face template will be used.
On August 19, the U.S. District Court for the Southern District of Illinois denied defendants’ motion to dismiss claims that they unlawfully collected individuals’ biometric fingerprint data without first receiving informed consent. The court also addressed an argument as to whether the Illinois Biometric Information Privacy Act (BIPA) exemption for financial institutions violates the state’s constitution, ruling that the exemption applies only to institutions already subject to the data protection standards of the Gramm-Leach-Bliley Act (GLBA) and therefore does not arbitrarily exempt financial institutions. According to the order, the plaintiff filed a putative class action against two companies (defendants) alleging they violated Section 15(b) of BIPA by unlawfully collecting employees’ biometric fingerprint data for timekeeping purposes without informing employees in writing “of the purpose and period for which [their] fingerprints were being collected, stored, or used.” The plaintiff also claimed the defendants violated Section 15(a) of BIPA, which requires them to implement and follow a publicly available biometric data retention and destruction schedule. The defendants filed a motion to dismiss, which presented several arguments, including that (i) the plaintiff failed to plead an actual injury and therefore lacked Article III standing; (ii) BIPA violates the state’s constitution because it imposes strict compliance requirements on certain entities but “arbitrarily” exempts “‘the entire financial industry’”; (iii) one of the defendants—a fingerprint database manager—qualifies as an exempt financial institution under BIPA; and (iv) the claims are time-barred and barred by waiver or equitable estoppel.
The court disagreed, allowing the plaintiff’s informed consent claims under Section 15(b) to proceed, noting, among other things, that BIPA’s financial institution exclusion is not “‘artificially narrow’ in its focus since both exempt and non-exempt financial institutions are subject to data reporting laws, with neither group receiving a benefit the other does not.” The court further noted that there was no indication in the pleadings or in any declaration filed in motion practice that the fingerprint database manager defendant is a financial institution subject to the GLBA. However, the court remanded part of the suit back to state court. According to the court, the plaintiff’s Section 15(a) claims were not sufficient to establish Article III standing because this section “does not outline an entity’s duty to an individual” but rather “outlines a duty to the public generally.”