InfoBytes Blog

Financial Services Law Insights and Observations

  • Treasury reports on risks to financial firms adopting cloud services

    Federal Issues

    On February 8, the U.S. Treasury Department launched the interagency Cloud Services Steering Committee in an effort to improve regulatory and private sector cooperation and develop best practices for cloud-adoption frameworks and contracts. As part of the announcement, Treasury released a first-of-its-kind report discussing potential benefits and challenges associated with the adoption of cloud services technology by financial services firms. While recognizing that cloud-based technologies can improve access and reliability for local communities and help community banks compete with financial technology firms, Treasury found that financial services firms that rely on these technologies need more visibility, staff support, and cybersecurity incident response engagement from cloud service providers (CSPs).

    The report identified several significant challenges resulting from the use of cloud-based technologies in the financial sector. These include: (i) insufficient transparency to support due diligence and monitoring by financial institutions (financial institutions must fully understand the risks associated with cloud services in order to implement appropriate protections for consumers); (ii) gaps in human capital and tools to securely deploy cloud services (CSPs should engage experts and improve tools and frameworks to ensure financial institutions are able to implement resilient, secure platforms for customers); (iii) exposure to potential operational incidents (financial institutions have expressed concerns that cyber vulnerabilities originating at a CSP could have a cascading impact); (iv) potential impact of market concentration in cloud service offerings on the financial sector’s resilience (the current market relies on a small number of CSPs, a concentration that likely exists across banking, securities, and insurance markets); (v) dynamics in contract negotiations given market concentration (the small number of CSPs could affect financial institutions’ bargaining power); and (vi) international landscape and regulatory fragmentation (regulatory conflicts could result from the patchwork of global regulatory and supervisory approaches to cloud technology).

    The report, which received extensive input from U.S. regulators, private sector stakeholders, trade associations, and think tanks, does not impose any requirements, nor does it endorse or discourage firms from using a specific provider or cloud service. It does, however, recommend that Treasury and the broader financial regulatory community further evaluate the financial risks associated with having a limited number of CSPs offer cloud services.

    Federal Issues Department of Treasury Privacy, Cyber Risk & Data Security Cloud Technology Risk Management

  • FTC bans health vendor from sharing consumer info with advertiser

    Federal Issues

    On February 1, the DOJ filed a complaint on behalf of the FTC against a telehealth and prescription drug discount provider for allegedly violating the FTC Act and the Health Breach Notification Rule by failing to notify consumers that it was disclosing their personal health information to third parties for advertising purposes. The FTC stated that, as a vendor of personal health records, the company is required to comply with the Health Breach Notification Rule, which imposes certain reporting obligations on health apps and other companies that collect or use consumers’ health information (previously covered by InfoBytes here).

    According to the complaint filed in the U.S. District Court for the Northern District of California, the company—which allows users to keep track of their personal health information, including saving, tracking, and receiving prescription alerts—shared sensitive personal health information with advertisers and other third parties for years, even though it promised users that their health information would never be shared. The FTC maintained that the company also monetized users’ personal health information and used certain shared data to target its own users with personalized health- and medication-specific advertisements on various social media platforms. The company also allegedly: (i) permitted third parties to use shared data for their own internal purposes; (ii) falsely claimed compliance with the Digital Advertising Alliance principles (which require companies to obtain consent prior to using health information for advertising purposes); (iii) misrepresented its HIPAA compliance; (iv) failed to maintain sufficient formal, written, or standard privacy or data sharing policies or procedures to protect personal health information; and (v) failed to report the unauthorized disclosures.

    Under the terms of the proposed court order filed by the DOJ, the company would be required to pay a $1.5 million civil penalty, and would be prohibited from engaging in the identified alleged deceptive practices and from sharing personal health information with third parties for advertising purposes. The company would also be required to implement several measures to address the identified violations, including obtaining users’ affirmative consent before disclosing information to third parties (the company would be prohibited from using “dark patterns,” or manipulative designs, to obtain consent), directing third parties to delete shared data, notifying users about the breaches and the FTC’s enforcement action, implementing a data retention schedule, and putting in place a comprehensive privacy program to safeguard consumer data.

    Federal Issues FTC Enforcement Privacy, Cyber Risk & Data Security Advertisement Consumer Protection FTC Act Health Breach Notification Rule Dark Patterns

  • Illinois Supreme Court sets five-year SOL for section 15 BIPA violations

    Privacy, Cyber Risk & Data Security

    On February 2, the Illinois Supreme Court held that under the state’s Biometric Information Privacy Act (BIPA), individuals have five years to assert violations of section 15 of the statute. The plaintiff sued his former employer claiming that by scanning his fingerprints, the company violated section 15(a) of BIPA (which provides for the retention and deletion of biometric data), as well as sections 15(b) and 15(d) (which provide for the consensual collection and disclosure of biometric identifiers and biometric information). According to the plaintiff, the defendant failed to implement and adhere to a publicly available biometric information retention and destruction policy, failed to obtain his consent to collect his biometric data, and disclosed his data to third parties without his consent. The defendant moved to dismiss the complaint as untimely, arguing that “claims brought under [BIPA] concern violations of privacy, and therefore, the one-year limitations period in section 13-201 of the [Code of Civil Procedure (Code)] should apply to such claims under [BIPA] because section 13-201 governs actions for the ‘publication of matter violating the right of privacy.’”

    The circuit court disagreed, stating that the lawsuit was timely filed because the five-year limitations period codified in section 13-205 of the Code applied to violations of BIPA. While the circuit court agreed that BIPA is a privacy statute, it said section 13-201 of the Code applies to privacy claims where “publication” is an element of the complaint. Because the plaintiff’s complaint does not involve the publication of biometric data and does not assert invasions of privacy or defamation, the one-year limitations period should not apply, the circuit court said, further adding that BIPA is not intended “to regulate the publication of biometric data.” The circuit court also concluded that the five-year limitations period applied in this case because BIPA itself does not contain a limitations period.

    The plaintiff amended his complaint, and the defendant eventually appealed. The appellate court ultimately concluded that the one-year limitations period codified in section 13-201 of the Code applies to claims under sections 15(c) and 15(d) of BIPA “where ‘publication or disclosure of biometric data is clearly an element’ of the claim,” and that the five-year limitations period codified in section 13-205 of the Code governs actions brought under sections 15(a), 15(b), and 15(e) (which provides data safeguarding requirements) of BIPA “because ‘no element of publication or dissemination’ exists in those claims.” The defendant continued to argue that BIPA is a privacy statute and, as such, claims brought under section 15 of BIPA should be governed by the one-year limitations period codified in section 13-201 of the Code.

    In affirming in part and reversing in part the judgment of the appellate court, the Illinois Supreme Court applied the state’s “five-year catchall limitations period” to claims brought under BIPA. “[A]pplying two different time limitations periods or time-bar standards to different subsections of section 15 of [BIPA] would create an unclear, inconvenient, inconsistent, and potentially unworkable regime as it pertains to the administration of justice for claims under [BIPA],” the Illinois Supreme Court wrote.

    Privacy, Cyber Risk & Data Security Courts Illinois BIPA Statute of Limitations Class Action

  • FTC finalizes data-security order with ed tech provider

    Federal Issues

    On January 27, the FTC finalized an order with an education technology (ed tech) provider to resolve claims that the provider’s lax data security practices led to the exposure of millions of users’ and employees’ sensitive information, including Social Security numbers, email addresses, and passwords. As previously covered by InfoBytes, due to the company’s alleged failure to adequately protect the personal information collected from its users and employees, the company experienced four data breaches beginning in September 2017, when a phishing attack granted a hacker access to employees’ direct deposit information. Claiming violations of Section 5(a) of the FTC Act, the FTC alleged the company failed to implement basic security measures, stored personal data insecurely, and failed to implement a written security policy until January 2021, despite experiencing three phishing attacks.

    Under the terms of the final decision and order, the company (which neither admitted nor denied any of the allegations) is required to take several measures to address the alleged conduct, including: (i) implementing a data retention and deletion process, which will allow users to request access to and deletion of their data; (ii) providing multi-factor authentication methods for users to secure their accounts; (iii) providing notice to affected individuals; (iv) implementing a comprehensive information security program; and (v) obtaining initial and biennial third-party information security assessments. The company must also submit covered incident reports to the FTC and is prohibited from making any misrepresentations relating to how it collects, maintains, uses, or deletes individuals’ covered information, or how it permits or denies access to that information.
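
    Multi-factor authentication of the kind required by item (ii) above is commonly implemented with time-based one-time passwords (TOTP). The minimal Python sketch below uses the third-party pyotp library to illustrate the idea; the enroll_user and verify_login helper names are hypothetical, and nothing in the sketch is drawn from the FTC’s order.

    ```python
    # Hypothetical TOTP-based multi-factor authentication sketch using pyotp.
    # The secret is generated at enrollment, stored server-side, and
    # provisioned into the user's authenticator app (e.g., via QR code).
    import pyotp


    def enroll_user() -> str:
        """Generate a per-user TOTP secret at enrollment."""
        return pyotp.random_base32()


    def verify_login(secret: str, submitted_code: str) -> bool:
        """Check the six-digit code the user submits at login against
        the code derived from their stored secret."""
        return pyotp.TOTP(secret).verify(submitted_code)


    secret = enroll_user()
    code = pyotp.TOTP(secret).now()  # in practice, the user's app supplies this
    print(verify_login(secret, code))  # True
    ```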

    Federal Issues FTC Enforcement Privacy, Cyber Risk & Data Security Data Breach FTC Act

  • NIST releases new AI framework to help organizations mitigate risk

    Privacy, Cyber Risk & Data Security

    On January 26, the National Institute of Standards and Technology (NIST) released voluntary guidance to help organizations that design, deploy, or use artificial intelligence (AI) systems mitigate risk. The Artificial Intelligence Risk Management Framework (developed in close collaboration with the private and public sectors pursuant to a Congressional directive under the National Defense Authorization Act for Fiscal Year 2021) “provides a flexible, structured and measurable process that will enable organizations to address AI risks,” NIST explained. The framework breaks down the process into four high-level functions: govern, map, measure, and manage. These categories, among other things, (i) provide guidance on how to evaluate AI for legal and regulatory compliance and ensure policies, processes, procedures, and practices are transparent, robust, and effective; (ii) outline processes for addressing AI risks and benefits arising from third-party software and data; (iii) describe the mapping process for collecting information to establish the context to frame AI-related risks; (iv) provide guidance for employing and measuring “quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts”; and (v) set forth a proposed process for managing and allocating risk management resources. Examples are also provided within the framework to help organizations implement the guidance.
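
    To make the framework’s structure concrete, the hypothetical Python sketch below models a simple AI risk register organized around the four functions. Only the function names (govern, map, measure, manage) come from the framework itself; the classes, field names, and example entries are illustrative assumptions, not NIST guidance.

    ```python
    # Hypothetical sketch of an AI risk register organized around the
    # AI RMF's four functions. Class and field names are illustrative;
    # only the function names come from NIST's published framework.
    from dataclasses import dataclass, field
    from enum import Enum


    class RmfFunction(Enum):
        GOVERN = "govern"    # policies, accountability, transparency
        MAP = "map"          # establish context and frame AI-related risks
        MEASURE = "measure"  # analyze, assess, benchmark, and monitor risks
        MANAGE = "manage"    # prioritize and allocate risk-treatment resources


    @dataclass
    class AIRiskItem:
        function: RmfFunction
        description: str
        owner: str
        mitigations: list[str] = field(default_factory=list)


    def items_for(register: list[AIRiskItem], fn: RmfFunction) -> list[AIRiskItem]:
        """Return the register entries filed under a given RMF function."""
        return [item for item in register if item.function is fn]


    register = [
        AIRiskItem(RmfFunction.MAP, "Third-party training data of unknown provenance",
                   owner="data-governance", mitigations=["vendor due diligence"]),
        AIRiskItem(RmfFunction.MEASURE, "Model accuracy drift in production",
                   owner="ml-platform", mitigations=["monthly benchmark runs"]),
    ]

    for item in items_for(register, RmfFunction.MAP):
        print(item.function.value, "-", item.description)
    ```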

    “This voluntary framework will help develop and deploy AI technologies in ways that enable the United States, other nations and organizations to enhance AI trustworthiness while managing risks based on our democratic values,” Deputy Commerce Secretary Don Graves said in the announcement. “It should accelerate AI innovation and growth while advancing—rather than restricting or damaging—civil rights, civil liberties and equity for all.” 

    Privacy, Cyber Risk & Data Security NIST Artificial Intelligence Risk Management

  • California investigating mobile apps’ CCPA compliance

    Privacy, Cyber Risk & Data Security

    On January 27, the California attorney general announced an investigation into mobile applications’ compliance with the California Consumer Privacy Act (CCPA). The AG sent letters to businesses in the retail, travel, and food service industries that maintain popular mobile apps that allegedly fail to comply with consumer opt-out requests or do not offer mechanisms for consumers to delete personal information or stop the sale of their data. The investigation also focuses on businesses that fail to process consumer opt-out and data-deletion requests submitted through an authorized agent, as required under the CCPA. “On this Data Privacy Day and every day, businesses must honor Californians’ right to opt out and delete personal information, including when those requests are made through an authorized agent,” the AG said, adding that authorized agent requests include “those sent by Permission Slip, a mobile application developed by Consumer Reports that allows consumers to send requests to opt out and delete their personal information.” The AG encouraged the tech industry to develop and adopt user-enabled global privacy controls for mobile operating systems to enable consumers to stop apps from selling their data.
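
    As an illustration of what honoring such a signal can look like, the Global Privacy Control (GPC) specification, one user-enabled signal of this kind on the web, has user agents send a “Sec-GPC: 1” request header to communicate an opt-out preference. The minimal Python sketch below (assuming Flask; the record_opt_out helper and the X-User-Id header are hypothetical placeholders) shows one way a service might treat that signal as a do-not-sell request; it is illustrative only and not drawn from the AG’s announcement.

    ```python
    # Minimal sketch: honoring the Global Privacy Control signal server-side.
    # Per the GPC specification, a "Sec-GPC: 1" request header communicates
    # the user's opt-out preference. Flask is assumed; record_opt_out() and
    # the X-User-Id header are hypothetical placeholders.
    from flask import Flask, request

    app = Flask(__name__)


    def record_opt_out(user_id: str) -> None:
        """Hypothetical persistence hook: flag the user as opted out of sales."""
        print(f"opt-out recorded for {user_id}")


    @app.route("/api/profile")
    def profile():
        user_id = request.headers.get("X-User-Id", "anonymous")
        gpc = request.headers.get("Sec-GPC") == "1"
        if gpc:
            # Treat the signal as a do-not-sell/do-not-share request.
            record_opt_out(user_id)
        return {"user": user_id, "gpc_honored": gpc}
    ```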

    As previously covered by InfoBytes, the CCPA was enacted in 2018 and took effect January 1, 2020. The California Privacy Protection Agency is currently working on draft regulations to implement the California Privacy Rights Act, which largely became effective January 1 and amends and builds upon the CCPA. (Covered by InfoBytes here.)

    Privacy, Cyber Risk & Data Security State Issues State Attorney General California CCPA Compliance Opt-Out Consumer Protection CPRA

  • U.S. messaging service fined €5.5 million for GDPR violations

    Privacy, Cyber Risk & Data Security

    On January 19, the Irish Data Protection Commission (DPC) announced the conclusion of an inquiry into the data processing practices of a U.S.-based messaging service’s Ireland operations and fined the messaging service €5.5 million. The investigation was part of a broader GDPR compliance inquiry prompted by a May 25, 2018 complaint from a German data subject.

    The DPC noted that in advance of the date on which the GDPR became effective (May 25, 2018), the U.S. company updated its terms of service and notified users that, to continue accessing the messaging service, they would need to accept the updated terms by clicking “agree and continue.” The complainant asserted that, in doing so, the messaging service forced users to consent to the processing of their personal data for service improvement and security. 

    The company claimed that when a user accepted the updated terms of service, the user entered into a contract with the company. The company therefore maintained that “the processing of users’ data in connection with the delivery of its service was necessary for the performance of that contract, to include the provision of service improvement and security features, so that such processing operations were lawful by reference to Article 6(1)(b) of the GDPR (the ‘contract’ legal basis for processing).” The complainant argued that, contrary to the company’s stated intention, the company was “seeking to rely on consent to provide a lawful basis for its processing of users’ data.”

    The DPC issued a draft decision that was submitted to its EU peer regulators (Concerned Supervisory Authorities or “CSAs”). The DPC concluded that the company was in breach of its GDPR transparency obligations under Articles 12 and 13(1)(c), and stated that users had “insufficient clarity as to what processing operations were being carried out on their personal data.” With respect to whether the company was obliged to rely on consent as its legal basis in connection with the delivery of the service (including for service improvement and security purposes), the DPC disagreed with the complainant’s “forced consent” argument, finding that the company was not required to rely on user consent as providing a lawful basis for its processing of their personal data.

    Noting that the DPC had previously imposed a €225 million fine against the company last September for breaching its transparency obligations to users about how their information was being disclosed over the same time period (covered by InfoBytes here), the DPC did not propose an additional fine. Six of the 47 CSAs, however, objected to the DPC’s conclusion as to the “forced consent” aspect of its decision, arguing that the company “should not be permitted to rely on the contract legal basis on the basis that the delivery of service improvement and security could not be said to be necessary to perform the core elements of what was said to be a much more limited form of contract.”

    The dispute was referred to the European Data Protection Board (EDPB), which issued a final decision on January 12, where it found that, “as a matter of principle, [the company] was not entitled to rely on the contract legal basis as providing a lawful basis for its processing of personal data for the purposes of service improvement and security,” and that in doing so, the company contravened Article 6(1) of the GDPR.

    The DPC handed down a €5.5 million administrative fine and ordered the company to bring its processing operations into compliance with the GDPR within a six-month period. Separately, the EDPB instructed the DPC “to conduct a fresh investigation” that would span all of the company’s processing operations to determine whether the company is in compliance with relevant GDPR obligations regarding the processing of personal data for behavioral advertising, marketing purposes, the provisions of metrics to third parties, and the exchange of data with affiliated companies for the purpose of service improvements.

    The DPC challenged the EDPB’s decision, stating that the board “does not have a general supervision role akin to national courts in respect of national independent authorities, and it is not open to the EDPB to instruct and direct an authority to engage in open-ended and speculative investigation.” The DPC suggested that it is considering bringing an action before the Court of Justice of the European Union to “seek the setting aside of the EDPB’s direction.”

    Privacy, Cyber Risk & Data Security Of Interest to Non-US Persons Ireland Enforcement Consumer Protection EU GDPR

  • 9th Circuit reverses decision in COPPA suit

    Courts

    In December, the U.S. Court of Appeals for the Ninth Circuit reversed and remanded a district court’s dismissal of a suit alleging that a multinational technology company used persistent identifiers to collect children’s data and track their online behavior surreptitiously and without their consent in violation of the Children’s Online Privacy Protection Act (COPPA). According to the opinion, the company used targeted advertising “aided by sophisticated technology that delivers curated, customized advertising based on information about specific users.” The opinion further explained that the company’s technology “depends partly on what [FTC] regulations call ‘persistent identifiers,’” i.e., information “that can be used to recognize a user over time and across different Web sites or online services.” The opinion also noted that in 2013, the FTC adopted regulations under COPPA that barred the collection of children’s “persistent identifiers” without parental consent. In addition to the COPPA violations, the plaintiff class alleged state law claims arising under the constitutional, statutory, and common law of California, Colorado, Indiana, Massachusetts, New Jersey, and Tennessee. The district court ruled that the “core allegations” in the third amended complaint were squarely covered, and preempted, by COPPA.

    On appeal, the 9th Circuit considered whether COPPA preempts state law claims based on underlying conduct that also violates COPPA’s regulations. To determine this, the appellate court examined the language of COPPA’s preemption clause, which states that state and local governments cannot impose liability for interstate commercial activities that is “inconsistent with the treatment of those activities or actions” under COPPA. The opinion noted that the 9th Circuit has long held “that a state law damages remedy for conduct already proscribed by federal regulations is not preempted,” and that the statutory term “inconsistent” in the preemption context refers to contradictory state law requirements, or to requirements that stand as obstacles to federal objectives. The appellate court stated that it was not “persuaded that the insertion of ‘treatment’ in the preemption clause here evinces clear congressional intent to create an exclusive remedial scheme for enforcement of COPPA requirements.” The opinion noted that because “the bar on ‘inconsistent’ state laws implicitly preserves ‘consistent’ state substantive laws, it would be nonsensical to assume Congress intended to simultaneously preclude all state remedies for violations of those laws.” As such, the appellate court held that “COPPA’s preemption clause does not bar state-law causes of action that are parallel to, or proscribe the same conduct forbidden by, COPPA. Express preemption therefore does not apply to the children’s claims.”

    Courts Appellate Ninth Circuit COPPA Privacy, Cyber Risk & Data Security FTC State Issues

  • FTC finalizes data breach order with online alcohol marketplace

    Federal Issues

    On January 10, the FTC announced it has finalized an order with a company that operates an online alcohol marketplace, along with its CEO, related to a data breach that allegedly exposed the personal information of roughly 2.5 million consumers. As previously covered by InfoBytes, the FTC alleged the respondents were alerted to problems with the company’s data security procedures following an earlier security incident in 2018, which involved hackers accessing company servers to mine cryptocurrency until the company changed its cloud computing account login information. The FTC asserted, however, that the company failed to take appropriate measures to address its security problems even though it publicly claimed it had appropriate security protections in place. Among other things, the respondents allegedly violated the FTC Act by (i) failing to implement basic security measures or put in place reasonable safeguards to secure the personal information it collected and stored; (ii) storing critical database information, including login credentials, on an unsecured platform; (iii) failing to monitor its network for security threats or unauthorized attempts to access or remove personal data; and (iv) exposing customers to hackers, identity thieves, and malicious actors who use personal information to open fraudulent lines of credit or commit other fraud. The respondents neither admit nor deny the allegations.

    The terms of the final decision and order prohibit the company from making any misrepresentations in connection with any offered product or service related to how it collects, uses, discloses, maintains, deletes, or permits or denies access to personal information. Additionally, the company is required to destroy any collected personal data that is not necessary for providing products or services to consumers, and must refrain from collecting or maintaining personal information unless it is necessary for specific purposes provided in a data retention schedule. The company must also implement and maintain a comprehensive information security program, establish security safeguards to protect against specified security incidents, obtain initial and biennial third-party information security assessments, and publicly detail on its website information on its data collection practices. The order also requires the CEO to implement an information security program at any relevant business for which he is a majority owner, CEO, or senior officer with information security responsibilities.

    Federal Issues Privacy, Cyber Risk & Data Security FTC Enforcement

  • District Court approves $11 million data breach settlement

    Privacy, Cyber Risk & Data Security

    On January 4, the U.S. District Court for the Northern District of Texas granted final approval of an $11 million class action settlement resolving allegations related to a February 2021 data breach that compromised more than 4.3 million customers’ personally identifiable information, including names, Social Security numbers, driver’s license numbers, dates of birth, and username/password information. According to plaintiffs’ amended complaint, the defendant insurance software providers failed to notify affected individuals about the data breach until on or after May 10, 2021, despite commencing an investigation in March. Plaintiffs maintained that the defendants’ alleged failure to comply with FTC cybersecurity guidelines and industry data protection standards put at risk their financial and personal records, and said they now face years of constant surveillance to prevent potential identity theft and fraud. Under the terms of the settlement (see also plaintiffs’ memorandum of law in support of the motion for final approval), class members will each receive up to $5,000 for out-of-pocket expenses, including up to eight hours of lost time at $25/hour, as well as 12 months of financial fraud protection. Members of a California subclass will receive additional benefits of between $100 and $300 each. The defendants are also responsible for paying each named plaintiff a $2,000 service award and must pay over $3 million in attorney fees, costs, and expenses.

    Privacy, Cyber Risk & Data Security Courts Settlement Data Breach State Issues Class Action California FTC
