On September 17, the California attorney general announced a settlement with a technology company that operates a fertility-tracking mobile app to resolve claims that security flaws put users’ sensitive personal and medical information at risk in violation of state consumer protection and privacy laws. According to the complaint filed in the Superior Court for the County of San Francisco, the company’s app allegedly failed to adequately safeguard and preserve the confidentiality of medical information by, among other things, (i) allowing access to user information without the user’s consent, by failing to “authenticate the legitimacy of the user to whom the medical information was shared”; (ii) allowing a password-change vulnerability to permit unauthorized access and disclosure of information stored in the app without the user’s consent; (iii) making misleading statements concerning implemented security measures and the app’s ability to protect consumers’ sensitive personal and medical information from unauthorized disclosure; and (iv) failing to implement and maintain reasonable security procedures and practices.
Under the terms of the settlement, the company—which does not admit liability—is required to pay a $250,000 civil penalty and incorporate privacy and security design principles into its mobile apps. The company must also obtain affirmative authorization from users before sharing or disclosing sensitive personal and medical information, and must allow users to revoke previously granted consent. Additionally, the company is required to provide ongoing annual employee training concerning the proper handling and protection of sensitive personal and medical information, in addition to training on cyberstalking awareness and prevention. According to the AG’s press release, the settlement also includes “a first-ever injunctive term that requires [the company] to consider how privacy or security lapses may uniquely impact women.”
On September 15, the New York attorney general announced a settlement with a national franchisor of a coffee retail chain to resolve allegations that the company violated New York’s data breach notification statute and several state consumer protection laws by failing to protect thousands of customer accounts from a series of cyberattacks. As previously covered by InfoBytes, the AG claimed that, beginning in 2015, customer accounts containing stored value cards that could be used to make purchases in stores and online were subject to repeated cyberattack attempts, resulting in more than 20,000 compromised accounts and “tens of thousands” of dollars stolen. The AG alleged that, following the attacks, the company failed to protect the affected customers, conduct an investigation to determine the extent of the attacks, or implement appropriate safeguards to limit future attacks. The settlement, subject to court approval, would require the company to (i) notify affected customers, reset their passwords, and refund any stored value cards used without permission; (ii) pay $650,000 in penalties and costs; (iii) maintain safeguards to protect against similar attacks in the future; and (iv) develop and follow appropriate incident response procedures.
On August 19, the U.S. District Court for the Northern District of California granted preliminary approval of a $650 million biometric privacy settlement between a global social media company and a class of Illinois users. If granted final approval, the settlement would resolve consolidated class action claims that the social media company violated the Illinois Biometric Information Privacy Act (BIPA) by allegedly developing a face template that used facial-recognition technology without users’ consent. A lesser $550 million settlement deal filed in May (covered by InfoBytes here) was rejected by the court due to “concerns about an unduly steep discount on statutory damages under the BIPA, a conduct remedy that did not appear to require any meaningful changes by [the social media company], over-broad releases by the class, and the sufficiency of notice to class members.” The preliminarily approved settlement would also require the social media company to provide nonmonetary injunctive relief by setting all default face recognition user settings to “off” and by deleting all existing and stored face templates for class members unless class members provide their express consent after receiving a separate disclosure on how the face template will be used.
On August 19, the U.S. District Court for the Southern District of Illinois denied defendants’ motion to dismiss claims that they unlawfully collected individuals’ biometric fingerprint data without first receiving informed consent. The court also addressed an argument as to whether the Illinois Biometric Information Privacy Act (BIPA) exemption for financial institutions violates the state’s constitution, ruling that the exemption applies only to institutions already subject to data protection standards of the Gramm-Leach-Bliley Act (GLBA) and therefore does not arbitrarily exempt financial institutions. According to the order, the plaintiff filed a putative class action against two companies (defendants) alleging they violated Section 15(b) of BIPA by unlawfully collecting employees’ biometric fingerprint data for time-tracking purposes without informing employees in writing “of the purpose and period for which [their] fingerprints were being collected, stored, or used.” The plaintiff also claimed the defendants violated Section 15(a) of BIPA, which requires them to implement and follow a publicly available biometric data retention and destruction schedule. The defendants filed a motion to dismiss, which presented several arguments, including that (i) the plaintiff failed to plead an actual injury and therefore lacked Article III standing; (ii) BIPA violates the state’s constitution because it imposes strict compliance requirements on certain entities but “arbitrarily” exempts “‘the entire financial industry’”; (iii) one of the defendants—a fingerprint database manager—qualifies as an exempt financial institution under BIPA; and (iv) the claims are time-barred and barred by waiver or equitable estoppel.
The court disagreed, allowing the plaintiff’s informed consent claims under Section 15(b) to proceed, noting, among other things, that BIPA’s financial institution exclusion is not “‘artificially narrow’ in its focus since both exempt and non-exempt financial institutions are subject to data reporting laws, with neither group receiving a benefit the other does not.” The court further noted that there is no indication in the pleading or in the declarations filed in motion practice that the fingerprint database manager defendant is a financial institution subject to the GLBA. However, the court remanded part of the suit to state court. According to the court, the plaintiff’s Section 15(a) claims were not sufficient to establish Article III standing because this section “does not outline an entity’s duty to an individual” but rather “outlines a duty to the public generally.”
On August 5, the FTC Commissioners testified before the Senate Committee on Commerce, Science, and Transportation and discussed, among other things, the agency’s continued enforcement of the EU-U.S. Privacy Shield, despite the recent Court of Justice of the European Union (CJEU) invalidation of the framework, and their interest in federal data privacy legislation. As previously covered by InfoBytes, in July, the CJEU determined that because the requirements of U.S. national security, public interest and law enforcement have “primacy” over the data protection principles of the EU-U.S. Privacy Shield, the data transferred under the EU-U.S. Privacy Shield would not be subject to the same level of protections prescribed by the EU General Data Protection Regulation, and thus, declared the EU-U.S. Privacy Shield invalid.
In his opening remarks, Commissioner Simons emphasized that the FTC will “continue to hold companies accountable for their privacy commitments, including privacy promises made under the Privacy Shield,” which the FTC has also noted on its website. Additionally, Simons urged Congress to enact federal privacy and data security legislation that would be enforced by the FTC and give the agency, among other things, the “ability to seek civil penalties” and “targeted [Administrative Procedures Act] rulemaking authority to ensure that the law keeps pace with changes and technology in the market.” Moreover, Commissioner Wilson agreed with a senator’s proposition that the enactment of a preemptive federal privacy framework would make “achieving a future adequacy determination by the E.U. easier.”
On July 16, the FCC issued an order adopting rules to further encourage phone companies to block illegal and unwanted robocalls and to continue the Commission’s implementation of the TRACED Act (covered by InfoBytes here). The rule establishes two safe harbors from liability for the unintended or inadvertent blocking of wanted calls: (i) terminating voice service providers that block calls will not be held liable under the Communications Act and FCC rules, provided “reasonable analytics,” such as caller ID authentication information, are used to identify and block illegal or unwanted calls; and (ii) voice service providers will not be held liable for blocking calls from “bad-actor upstream voice service providers that continue to allow unwanted calls to traverse their networks.” The FCC’s order also includes a Further Notice of Proposed Rulemaking seeking comments on, among other things, “whether to obligate originating and intermediate providers to better police their networks against illegal calls,” whether the “reasonable analytics” safe harbor should be expanded “to include network-based blocking without consumer opt-out,” and whether the Commission should adopt more extensive redress requirements and require terminating providers to provide consumers information about blocked calls.
Court of Justice of the European Union invalidates EU-U.S. Privacy Shield; standard contractual clauses survive (for now)
On July 16, 2020, the Court of Justice of the European Union (CJEU) issued its opinion in the Schrems II case (Case C-311/18). In its opinion, the CJEU concluded that the Standard Contractual Clauses issued by the European Commission for the transfer of personal data to data processors established outside of the EU are valid. However, the Court invalidated the EU-U.S. Privacy Shield. The ruling cannot be appealed.
In 2015, a privacy campaigner named Max Schrems filed a complaint with Ireland’s Data Protection Commissioner challenging a global social media company’s use of data transfers from servers in Ireland to servers in the U.S. Schrems argued that U.S. laws did not offer sufficient protection of EU customer data, that EU customer data might be at risk of being accessed and processed by the U.S. government once transferred, and that there was no remedy available to EU individuals to ensure protection of their personal data after transfer to the U.S. Schrems sought the suspension or prohibition of future data transfers, which were executed by the company through standard data protection contractual clauses (a method approved by the European Commission in 2010 by Decision 2010/87). The social media company had utilized these standard contractual clauses after the CJEU invalidated the U.S.-EU Safe Harbor Framework in 2015.
Following the complaint, Ireland’s Data Protection Commissioner brought proceedings against the social media company in the Irish High Court, which referred numerous questions to the CJEU for a preliminary ruling, including questions addressing the validity of the standard contractual clauses and the EU-U.S. Privacy Shield.
CJEU Opinion – Standard Contractual Clauses (Decision 2010/87)
Upon review of the recommendations from the CJEU’s Advocate General, published on December 19, 2019, the CJEU found Decision 2010/87, which approves the use of standard contractual clauses to transfer personal data, to be valid.
The CJEU noted that the GDPR applies to the transfer of personal data for commercial purposes by a company operating in an EU member state to another company outside of the EU, notwithstanding the third-party country’s processing of the data under its own security laws. Moreover, the CJEU explained that data protection contractual clauses between an EU company and a company operating in a third-party country must afford a level of protection “essentially equivalent to that which is guaranteed within the European Union” under the GDPR. According to the CJEU, the level of protection must take into consideration not only the contractual clauses executed by the companies, but the “relevant aspects of the legal system of that third country.”
As for Decision 2010/87, the CJEU determined that it provides effective mechanisms to, in practice, ensure contractual clauses governing data transfers comply with the level of protection required by the GDPR, and appropriately requires the suspension or prohibition of transfers in the event the clauses are breached or unable to be honored. The CJEU specifically highlighted the certification required of the EU data exporter and the third-party country recipient to verify, prior to any transfer, (i) the level of data protection in the third-party country; and (ii) their abilities to comply with the data protection clauses.
CJEU Opinion – EU-U.S. Privacy Shield (Decision 2016/1250)
The CJEU decided to examine and rule on the validity of the EU-U.S. Privacy Shield. The CJEU determined that because the requirements of U.S. national security, public interest and law enforcement have “primacy” over the data protection principles of the EU-U.S. Privacy Shield, the data transferred under the EU-U.S. Privacy Shield would not be subject to the same level of protections prescribed by the GDPR. Specifically, the CJEU held that the surveillance programs used by U.S. authorities are not proportionally equivalent to those allowed under EU law because they are not “limited to what is strictly necessary,” nor, under certain surveillance programs, does the U.S. “grant data subjects actionable rights before the courts against the U.S. authorities.” Moreover, the CJEU rejected the argument that the Ombudsperson mechanism satisfies the GDPR’s right to judicial protection, stating that it “does not provide any cause of action before a body which offers the persons whose data is transferred to the United States guarantees essentially equivalent to those required by [the GDPR],” and the Ombudsperson “cannot be regarded as a tribunal.” Thus, on those grounds, the CJEU declared the EU-U.S. Privacy Shield invalid.
The California attorney general recently published a set of frequently asked questions providing general consumer information on the California Consumer Privacy Act (CCPA). The CCPA—enacted in June 2018 (covered by a Buckley Special Alert) and amended several times—became effective January 1. Final proposed regulations were submitted by the AG last month as required under the CCPA’s July 1 statutory deadline (covered by InfoBytes here), and are currently with the California Office of Administrative Law for review. The FAQs—which will be updated periodically and do not serve as legal advice, regulatory guidance, or as an opinion of the AG—are intended to provide consumers guidance on exercising their rights under the CCPA.
- General CCPA information. The FAQs address consumer rights under the CCPA and reiterate that these rights apply only to California residents. This section also clarifies the definition of “personal information,” outlines businesses’ compliance thresholds, and states that the CCPA does not apply to nonprofit organizations and government agencies. The FAQs also remind consumers of their limited ability to sue businesses for CCPA violations and detail the conditions that must be met before a consumer may sue a business for a data breach. The FAQs remind consumers that if they believe a business has violated the CCPA, they may file a complaint with the AG’s office.
- Right to opt-out of sale. The FAQs answer common questions related to consumers’ requests for businesses not to sell their personal information. The FAQs provide information on the steps for submitting opt-out requests, as well as explanations for why a business may deny an opt-out request. They also address circumstances in which a consumer receives a response from a service provider that says it is not required to act on an opt-out request.
- Right to know. The FAQs discuss a consumer’s right to know what personal information is collected, used, shared, or sold, and clarify what consumers should do to submit requests to know, how long a business may take to respond, and what steps should be taken if a business requests more information, denies a request to know, or claims to be a service provider that is not required to respond.
- Request to delete. The FAQs address several questions related to consumers’ right to delete personal information, including how to submit a request to delete, businesses’ responses to and denials of requests to delete, and why a debt collector may attempt to collect a debt or a credit reporting agency may provide credit information even after a request to delete has been made.
- Right to non-discrimination. Consumers are reminded that a business “cannot deny goods or services, charge . . . a different price, or provide a different level or quality of goods or services just because [a consumer] exercised [his or her] rights under the CCPA.”
- Data brokers. The FAQs set forth the definition of a data broker under California law and outline steps for consumers interested in finding data brokers that collect and sell personal information, as well as measures consumers can take to opt-out of the sale of certain personal information.
On June 4, the FTC announced that a children’s mobile application developer agreed to pay $150,000 and to delete the personal information it allegedly unlawfully collected from children under the age of 13 to resolve allegations that the developer violated the Children’s Online Privacy Protection Act Rule (COPPA Rule). According to the complaint filed in the U.S. District Court for the Northern District of California, the developer, without notifying parents or obtaining verifiable parental consent, allowed third-party advertising networks to use persistent identifiers to track users of the child-directed apps in order to send targeted advertisements to the children. The proposed settlement requires the developer to destroy any personal data collected from children under 13 and notify and obtain verifiable consent from parents for any child-directed app or website they offer that collects personal information from children under 13. A $4 million penalty is suspended upon the payment of $150,000 due to the developer’s inability to pay.
In dissent, Commissioner Phillips argued that the fine imposed against the developer was too high, noting that having children view advertisements based on the collection of persistent identifiers “is something; but it is not everything,” under COPPA. Commissioner Phillips argued that because the developer did not “share sensitive personal information about children, or publicize it” nor did the developer expose children “to unauthorized contact from strangers, or otherwise put [the children] in danger,” the assessed penalty was too large in comparison to the harm.
In response to the dissent, Chairman Simons argued that while “harm is an important factor to consider…[the FTC’s] first priority is to use penalties to deter practices. Even in the absence of demonstrable money harm, Congress has said that these law violations merit the imposition of civil penalties.”
On May 8, plaintiffs in a biometric privacy class action in the U.S. District Court for the Northern District of California filed a motion requesting preliminary approval of a $550 million settlement deal. The preliminary settlement, reached between a global social media company and a class of Illinois users, would resolve consolidated class claims that alleged the social media company’s face scanning practices violated the Illinois Biometric Information Privacy Act (BIPA). As previously covered by InfoBytes, last August the U.S. Court of Appeals for the 9th Circuit affirmed class certification and held that the class’s claims met the standing requirement described in Spokeo, Inc. v. Robins because the social media company’s alleged development of a face template that used facial-recognition technology without users’ consent constituted an invasion of an individual’s private affairs and concrete interests. According to the motion for preliminary approval, the settlement would be the largest BIPA class action settlement ever and would provide “cash relief that far outstrips what class members typically receive in privacy settlements, even in cases in which substantial statutory damages are involved.” If approved, the social media company must also provide “forward-looking relief” to ensure it secures users’ informed, written consent as required under BIPA.
- Daniel P. Stipano to discuss "High standards: Best practices for banking marijuana-related businesses" at the ACAMS AML & Anti-Financial Crime Conference
- Daniel P. Stipano to discuss "Wait wait ... do tell me! Where the panelists answer to you" at the ACAMS AML & Anti-Financial Crime Conference
- Matthew P. Previn and Walter E. Zalenski to discuss "Is valid when made ... valid?" at the Women in Housing & Finance Partner Series webinar
- Warren W. Traiger and Caroline K. Eisner to discuss "CRA modernization and the OCC final rule" at CBA Live
- Daniel R. Alonso to discuss "Transnational corruption: A chat with former U.S. federal prosecutors in New York" at Marval Live Talks
- Sherry-Maria Safchuk and Lauren Frank to discuss "New CFPB interpretation on UDAAP" at a California Mortgage Bankers Association Mortgage Quality and Compliance Committee webinar
- Thomas A. Sporkin to discuss "Managing internal investigations and advanced government defense" at the Securities Enforcement Forum
- H Joshua Kotin to discuss "Mortgage servicing in a recession: Early intervention, loss mitigation and more" at the NAFCU Virtual Regulatory Compliance Seminar
- Daniel R. Alonso to discuss "Independent monitoring in the United States" at the World Compliance Association Peru Chapter IV International Conference on Compliance and the Fight Against Corruption
- Jonice Gray Tucker to discuss "The future of fair lending" at the Mortgage Bankers Association Regulatory Compliance Conference
- Michelle L. Rogers to discuss "Major litigation" at the Mortgage Bankers Association Regulatory Compliance Conference
- Kathryn L. Ryan to discuss "Pandemic fallout – Navigating practical operational challenges" at the Mortgage Bankers Association Regulatory Compliance Conference
- Jonice Gray Tucker to discuss "Consumer financial services" at the Practising Law Institute Banking Law Institute