
InfoBytes Blog

Financial Services Law Insights and Observations



  • NYDFS announces cybersecurity toolkit for small businesses

    Privacy, Cyber Risk & Data Security

    On November 17, NYDFS announced a partnership with a non-profit company to provide a free cybersecurity toolkit to small businesses, including those in the financial services sector. The toolkit is intended to help small businesses strengthen their cybersecurity and to protect themselves and their customers from growing cyber threats. Operational tools and educational resources covered in the toolkit address “identifying hardware and software, updating defenses against cyber threats, strengthening passwords and multi-factor authentication, backing up and recovering data, and protecting email systems.” NYDFS’ partnership with the company also includes the development of a set of sample policies based on cybersecurity best practices to help small businesses establish necessary governance and procedures. The sample policies include, among other things, a risk assessment and a sample third-party service provider policy. NYDFS advises small businesses to “review the tools and sample policies and to adapt them to their specific business risks and operations, including to comply with any applicable state and federal laws.”

    Privacy/Cyber Risk & Data Security State Issues State Regulator NYDFS

  • California voters approve expanded privacy rights

    Privacy, Cyber Risk & Data Security

    On November 3, California voters approved a ballot initiative, the California Privacy Rights Act of 2020 (CPRA), that expands on the California Consumer Privacy Act (CCPA). While there are a number of differences between the CPRA and the CCPA, some key provisions include:

    • Adding expanded consumer rights, including the right to correction and the right to limit sharing of personal information for cross-context behavioral advertising, whether or not for monetary or other valuable consideration.
    • Changing the definitions of various entities, including increasing the numerical threshold for qualifying as a business from 50,000 to 100,000 consumers or households, and removing devices from this threshold.
    • Adding the category of sensitive personal information that is subject to specific rights.
    • Creating a new privacy agency, the California Privacy Protection Agency, to administer, implement, and enforce the CPRA.

    It is important to note that the Gramm-Leach-Bliley Act and Fair Credit Reporting Act exemptions remain in the CPRA, and the act extends the employee and business-to-business exemptions to January 1, 2023.

    Implementation deadlines

    The CPRA becomes effective January 1, 2023, with enforcement delayed until July 1, 2023. However, the CPRA contains a look-back provision (i.e., the CPRA will apply to personal information collected by a business on or after January 1, 2022). The new privacy agency also is required to begin drafting regulations starting on July 1, 2021, with final regulations to be completed one year later.

    Learn more

    Please refer to a Buckley article for further information on the differences between the CCPA and the CPRA: 6 Key Ways the California Privacy Rights Act of 2020 Would Revise the CCPA (Corporate Compliance Insights), as well as continuing InfoBytes coverage here.

    Privacy/Cyber Risk & Data Security CCPA CPRA California Consumer Protection Ballot Initiative

  • Health insurer to pay $48 million to resolve 2014 data breach

    Privacy, Cyber Risk & Data Security

    On September 30, a multistate settlement was reached between a health insurance company and a coalition of 42 state attorneys general and the District of Columbia to resolve a 2014 data breach that allegedly compromised the personal information of more than 78 million customers nationwide. According to the states, cyber attackers infiltrated the company’s systems using malware installed through a phishing email. The data breach resulted in the exposure of consumers’ social security numbers, birthdays, and other personal data. Under the terms of the settlement, the health insurer must pay $39.5 million in penalties and fees, and is required to (i) not misrepresent the extent of its privacy and security protections; (ii) implement a comprehensive information security program, including “regular security reporting to the Board of Directors and prompt notice of significant security events to the CEO”; (iii) implement specific security requirements, including “anti-virus maintenance, access controls and two-factor authentication, encryption, risk assessments, penetration testing, and employee training”; and (iv) schedule third-party assessments and audits for three years.

    Separately, the California AG reached an $8.69 million settlement, subject to court approval, in a parallel investigation, which requires the health insurer to, among other things, implement changes to its information security program and fix vulnerabilities to prevent future data breaches.

    Previously in 2018, the health insurer reached a $115 million class action settlement, which provided for two years of credit monitoring, reimbursement of out-of-pocket costs related to the breach, and alternative cash payment for credit monitoring services already obtained (covered by InfoBytes here).

    Privacy/Cyber Risk & Data Security Courts Settlement Data Breach State Issues State Attorney General

  • California AG enters into privacy settlement with fertility-tracking mobile app

    Privacy, Cyber Risk & Data Security

    On September 17, the California attorney general announced a settlement with a technology company that operates a fertility-tracking mobile app to resolve claims that security flaws put users’ sensitive personal and medical information at risk in violation of state consumer protection and privacy laws. According to the complaint filed in the Superior Court for the County of San Francisco, the company’s app allegedly failed to adequately safeguard and preserve the confidentiality of medical information by, among other things, (i) allowing access to user information without the user’s consent, by failing to “authenticate the legitimacy of the user to whom the medical information was shared”; (ii) allowing a password-change vulnerability to permit unauthorized access and disclosure of information stored in the app without the user’s consent; (iii) making misleading statements concerning implemented security measures and the app’s ability to protect consumers’ sensitive personal and medical information from unauthorized disclosure; and (iv) failing to implement and maintain reasonable security procedures and practices.

    Under the terms of the settlement, the company—which does not admit liability—is required to pay a $250,000 civil penalty and incorporate privacy and security design principles into its mobile apps. The company must also obtain affirmative authorization from users before sharing or disclosing sensitive personal and medical information, and must allow users to revoke previously granted consent. Additionally, the company is required to provide ongoing annual employee training concerning the proper handling and protection of sensitive personal and medical information, in addition to training on cyberstalking awareness and prevention. According to the AG’s press release, the settlement also includes “a first-ever injunctive term that requires [the company] to consider how privacy or security lapses may uniquely impact women.”

    Privacy/Cyber Risk & Data Security Courts Settlement Data Breach State Issues State Attorney General

  • New York AG settles data breach lawsuit with national coffee chain

    Privacy, Cyber Risk & Data Security

    On September 15, the New York attorney general announced a settlement with a national franchisor of a coffee retail chain to resolve allegations that the company violated New York’s data breach notification statute and several state consumer protection laws by failing to protect thousands of customer accounts from a series of cyberattacks. As previously covered by InfoBytes, the AG claimed that, beginning in 2015, customer accounts containing stored value cards that could be used to make purchases in stores and online were subject to repeated cyberattack attempts, resulting in more than 20,000 compromised accounts and “tens of thousands” of dollars stolen. Following the attacks, the AG alleged that the company failed to take steps to protect the affected customers or to conduct an investigation to determine the extent of the attacks or implement appropriate safeguards to limit future attacks. The settlement, subject to court approval, would require the company to (i) notify affected customers, reset their passwords, and refund any stored value cards used without permission; (ii) pay $650,000 in penalties and costs; (iii) maintain safeguards to protect against similar attacks in the future; and (iv) develop and follow appropriate incident response procedures.

    Privacy/Cyber Risk & Data Security Courts Settlement Data Breach State Issues

  • District court preliminarily approves $650 million biometric privacy class action settlement

    Privacy, Cyber Risk & Data Security

    On August 19, the U.S. District Court for the Northern District of California granted preliminary approval of a $650 million biometric privacy settlement between a global social media company and a class of Illinois users. If granted final approval, the settlement would resolve consolidated class action claims that the social media company violated the Illinois Biometric Information Privacy Act (BIPA) by allegedly developing a face template that used facial-recognition technology without users’ consent. A lesser $550 million settlement deal filed in May (covered by InfoBytes here) was rejected by the court due to “concerns about an unduly steep discount on statutory damages under the BIPA, a conduct remedy that did not appear to require any meaningful changes by [the social media company], over-broad releases by the class, and the sufficiency of notice to class members.” The preliminarily approved settlement would also require the social media company to provide nonmonetary injunctive relief by setting all default face recognition user settings to “off” and by deleting all existing and stored face templates for class members unless class members provide their express consent after receiving a separate disclosure on how the face template will be used.

    Privacy/Cyber Risk & Data Security Courts BIPA Class Action Settlement

  • District court: BIPA does not violate Illinois constitution

    Privacy, Cyber Risk & Data Security

    On August 19, the U.S. District Court for the Southern District of Illinois denied defendants’ motion to dismiss claims that they unlawfully collected individuals’ biometric fingerprint data without first receiving informed consent. The court also addressed an argument as to whether the Illinois Biometric Information Privacy Act (BIPA) exemption for financial institutions violates the state’s constitution, ruling that the exemption applies only to institutions already subject to data protection standards of the Gramm-Leach-Bliley Act (GLBA) and therefore does not arbitrarily exempt financial institutions. According to the order, the plaintiff filed a putative class action against two companies (defendants) alleging they violated Section 15(b) of BIPA by unlawfully collecting employees’ biometric fingerprint data for time-tracking purposes without informing employees in writing “of the purpose and period for which [their] fingerprints were being collected, stored, or used.” The plaintiff also claimed the defendants violated Section 15(a) of BIPA, which requires them to implement and follow a publicly available biometric data retention and destruction schedule. The defendants filed a motion to dismiss, which presented several arguments, including that (i) the plaintiff failed to plead an actual injury and therefore lacked Article III standing; (ii) BIPA violates the state’s constitution because it imposes strict compliance requirements on certain entities but “arbitrarily” exempts “‘the entire financial industry’”; (iii) one of the defendants—a fingerprint database manager—qualifies as an exempt financial institution under BIPA; and (iv) the claims are time-barred and barred by waiver or equitable estoppel.

    The court disagreed, allowing the plaintiff’s informed consent claims under Section 15(b) to proceed, noting, among other things, that BIPA’s financial institution exclusion is not “‘artificially narrow’ in its focus since both exempt and non-exempt financial institutions are subject to data reporting laws, with neither group receiving a benefit the other does not.” The court further noted that nothing in the pleadings or declarations filed in motion practice indicates that the fingerprint database manager defendant is a financial institution subject to the GLBA. However, the court remanded part of the suit to state court. According to the court, the plaintiff’s Section 15(a) claims were not sufficient to establish Article III standing because this section “does not outline an entity’s duty to an individual” but rather “outlines a duty to the public generally.”

    Privacy/Cyber Risk & Data Security Courts BIPA State Issues

  • FTC continues to enforce Privacy Shield

    Privacy, Cyber Risk & Data Security

    On August 5, the FTC Commissioners testified before the Senate Committee on Commerce, Science, and Transportation and discussed, among other things, the agency’s continued enforcement of the EU-U.S. Privacy Shield, despite the recent Court of Justice of the European Union (CJEU) invalidation of the framework, and their interest in federal data privacy legislation. As previously covered by InfoBytes, in July, the CJEU determined that because the requirements of U.S. national security, public interest and law enforcement have “primacy” over the data protection principles of the EU-U.S. Privacy Shield, the data transferred under the EU-U.S. Privacy Shield would not be subject to the same level of protections prescribed by the EU General Data Protection Regulation, and thus, declared the EU-U.S. Privacy Shield invalid.

    In his opening remarks, Commissioner Simons emphasized that the FTC will “continue to hold companies accountable for their privacy commitments, including privacy promises made under the Privacy Shield,” which the FTC has also noted on its website. Additionally, Simons urged Congress to enact federal privacy and data security legislation that would be enforced by the FTC and give the agency, among other things, the “ability to seek civil penalties” and “targeted [Administrative Procedures Act] rulemaking authority to ensure that the law keeps pace with changes and technology in the market.” Moreover, Commissioner Wilson agreed with a senator’s proposition that the enactment of a preemptive federal privacy framework would make “achieving a future adequacy determination by the E.U. easier.”

    Privacy/Cyber Risk & Data Security FTC Courts GDPR European Union EU-US Privacy Shield

  • FCC provides safe harbors for blocking illegal robocalls

    Privacy, Cyber Risk & Data Security

    On July 16, the FCC issued an order adopting rules to further encourage phone companies to block illegal and unwanted robocalls and to continue the Commission’s implementation of the TRACED Act (covered by InfoBytes here). The rule establishes two safe harbors from liability for the unintended or inadvertent blocking of wanted calls: (i) terminating voice service providers will not be held liable under the Communications Act and FCC rules for blocking calls, provided “reasonable analytics,” such as caller ID authentication information, are used to identify and block illegal or unwanted calls; and (ii) voice service providers will not be held liable for blocking calls from “bad-actor upstream voice service providers that continue to allow unwanted calls to traverse their networks.” The FCC’s order also includes a Further Notice of Proposed Rulemaking seeking comments on, among other things, “whether to obligate originating and intermediate providers to better police their networks against illegal calls,” whether the “reasonable analytics” safe harbor should be expanded “to include network-based blocking without consumer opt-out,” and whether the Commission should adopt more extensive redress requirements and require terminating providers to provide consumers information about blocked calls.

    Privacy/Cyber Risk & Data Security FCC Robocalls TRACED Act

  • Court of Justice of the European Union invalidates EU-U.S. Privacy Shield; standard contractual clauses survive (for now)

    Privacy, Cyber Risk & Data Security

    On July 16, 2020, the Court of Justice of the European Union (CJEU) issued its opinion in the Schrems II case (Case C-311/18). In its opinion, the CJEU concluded that the Standard Contractual Clauses issued by the European Commission for the transfer of personal data to data processors established outside of the EU are valid. However, the Court invalidated the EU-U.S. Privacy Shield. The ruling cannot be appealed.

    Background

    In 2015, a privacy campaigner named Max Schrems filed a complaint with Ireland’s Data Protection Commissioner challenging a global social media company’s use of data transfers from servers in Ireland to servers in the U.S. Schrems argued that U.S. laws did not offer sufficient protection of EU customer data, that EU customer data might be at risk of being accessed and processed by the U.S. government once transferred, and that there was no remedy available to EU individuals to ensure protection of their personal data after transfer to the U.S. Schrems sought the suspension or prohibition of future data transfers, which were executed by the company through standard data protection contractual clauses (a method approved by the Court in 2010 by Decision 2010/87). The social media company had utilized these standard contractual clauses after the CJEU invalidated the U.S.-EU Safe Harbor Framework in 2015.

    Following the complaint, Ireland’s Data Protection Commissioner brought proceedings against the social media company in the Irish High Court, which referred numerous questions to the CJEU for a preliminary ruling, including questions addressing the validity of the standard contractual clauses and the EU-U.S. Privacy Shield.

    CJEU Opinion – Standard Contractual Clauses (Decision 2010/87)

    Upon review of the recommendations from the CJEU’s Advocate General published on December 19, 2019, the CJEU found the Decision approving the use of contractual clauses to transfer personal data valid.

    The CJEU noted that the GDPR applies to the transfer of personal data for commercial purposes by a company operating in an EU member state to another company outside of the EU, notwithstanding the third-party country’s processing of the data under its own security laws. Moreover, the CJEU explained that data protection contractual clauses between an EU company and a company operating in a third-party country must afford a level of protection “essentially equivalent to that which is guaranteed within the European Union” under the GDPR. According to the CJEU, the level of protection must take into consideration not only the contractual clauses executed by the companies, but the “relevant aspects of the legal system of that third country.”

    As for Decision 2010/87, the CJEU determined that it provides effective mechanisms to, in practice, ensure contractual clauses governing data transfers comply with the level of protection required by the GDPR, and appropriately requires the suspension or prohibition of transfers in the event the clauses are breached or unable to be honored. The CJEU specifically highlighted the certification required of the EU data exporter and the third-party country recipient to verify, prior to any transfer, (i) the level of data protection in the third-party country; and (ii) their ability to comply with the data protection clauses.

    CJEU Opinion – EU-U.S. Privacy Shield (Decision 2016/1250)

    The CJEU decided to examine and rule on the validity of the EU-U.S. Privacy Shield. The CJEU determined that because the requirements of U.S. national security, public interest and law enforcement have “primacy” over the data protection principles of the EU-U.S. Privacy Shield, the data transferred under the EU-U.S. Privacy Shield would not be subject to the same level of protections prescribed by the GDPR. Specifically, the CJEU held that the surveillance programs used by U.S. authorities are not proportionally equivalent to those allowed under the EU law because they are not “limited to what is strictly necessary,” nor, under certain surveillance programs, does the U.S. “grant data subjects actionable rights before the courts against the U.S. authorities.” Moreover, the CJEU rejected the argument that the Ombudsperson mechanism satisfies the GDPR’s right to judicial protection, stating that it “does not provide any cause of action before a body which offers the persons whose data is transferred to the United States guarantees essentially equivalent to those required by [the GDPR],” and the Ombudsperson “cannot be regarded as a tribunal.” Thus, on those grounds, the CJEU declared the EU-U.S. Privacy Shield invalid.

    Privacy/Cyber Risk & Data Security GDPR European Union Of Interest to Non-US Persons EU-US Privacy Shield

