InfoBytes Blog

Financial Services Law Insights and Observations

  • EU Commission, Council, and Parliament agree on details of AI Act

    Privacy, Cyber Risk & Data Security

    On December 9, the EU Commission announced a political agreement between the European Parliament and the European Council regarding the proposed Artificial Intelligence Act (AI Act). The agreement is provisional and subject to finalization of the text and formal approval by lawmakers in the European Parliament and the Council. The AI Act will regulate the development and use of AI systems and impose fines for non-compliant use. The objective of the law is to ensure that AI technology is safe and that its use respects fundamental democratic rights, while balancing the need to allow businesses to grow and thrive. The AI Act will also create a new European AI Office to ensure coordination and transparency and to “supervise the implementation and enforcement of the new rules.” According to an EU Parliament press release, powerful foundation models that pose systemic risks will be subject to specific rules in the final version of the AI Act based on a tiered classification.

    Except for foundation models, the AI Act adopts a risk-based approach to the regulation of AI systems, classifying them into different risk categories: minimal risk, high risk, and unacceptable risk. Most AI systems would be deemed minimal risk because they pose little to no risk to citizens’ safety. High-risk AI systems would be subject to the heaviest obligations, including certification of risk-mitigation systems, data governance, activity logging, documentation obligations, transparency requirements, human oversight, and cybersecurity standards. Examples of high-risk AI systems include utility infrastructure, medical devices, institutional admissions, law enforcement, biometric identification and categorization, and emotion recognition systems. AI systems deemed “unacceptable” are those that “present a clear threat to the fundamental rights of people,” such as systems that manipulate human behavior, like “deep fakes,” and any type of social scoring by governments or companies. While some biometric identification is allowed, “unacceptable” uses include emotion recognition systems in the workplace or by law enforcement agencies (with narrow exceptions).

    Sanctions for breach of the law will range from €7.5 million or 1.5 percent of a company’s global total revenue at the low end to €35 million or 7 percent of revenue at the high end. Once adopted, the law is expected to take effect in early 2026 or later. Compliance will be challenging (the law reaches AI systems made available in the EU), and companies should assess whether their use or development of such systems will be affected.
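    As a rough illustration of that tiered structure, the sketch below computes a penalty ceiling as the greater of the fixed euro amount and the revenue percentage for the two tiers quoted above. The “greater of” reading is an assumption here, since the provisional text had not been finalized at the time of writing.

    ```python
    # Illustrative sketch only: assumes each tier's penalty ceiling is the greater
    # of a fixed euro amount and a percentage of global total revenue, per the
    # figures quoted above. The AI Act's final text governs the actual amounts.

    def estimated_penalty_ceiling(global_revenue_eur: float, tier: str) -> float:
        """Estimated maximum fine in euros for a given penalty tier."""
        tiers = {
            "lowest": (7_500_000, 0.015),   # €7.5 million or 1.5 percent
            "highest": (35_000_000, 0.07),  # €35 million or 7 percent
        }
        fixed_amount, revenue_share = tiers[tier]
        return max(fixed_amount, revenue_share * global_revenue_eur)

    # A company with €2 billion in global revenue would face up to €140 million
    # at the highest tier, since 7 percent of revenue exceeds the €35 million floor.
    print(estimated_penalty_ceiling(2_000_000_000, "highest"))  # 140000000.0
    ```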

    Privacy, Cyber Risk & Data Security | Privacy | European Union | Artificial Intelligence | Privacy/Cyber Risk & Data Security | Of Interest to Non-US Persons

  • CFPB advisory stresses “permissible purpose” of consumer reports

    Agency Rule-Making & Guidance

    On July 7, the CFPB issued an advisory opinion stating its interpretation that, under certain FCRA permissible purpose provisions, a consumer reporting agency may not provide a consumer report to a user unless it has reason to believe that all of the information it includes pertains to the consumer who is the subject of the user’s request. The Bureau explained that “credit reporting companies and users of credit reports have specific obligations to protect the public’s data privacy,” and reminded covered entities that “FCRA section 604(f) strictly prohibits a person who uses or obtains a consumer report from doing so without a permissible purpose.”

    Among other things, the FCRA is designed to ensure fair and accurate reporting and requires users who buy these consumer credit reports to have a legally permissible purpose. Specifically, the advisory opinion clarifies that (i) insufficient matching procedures can result in credit reporting companies providing reports to entities without a permissible purpose, thus violating a consumer’s privacy rights (the Bureau explained that if a credit reporting company uses name-only matching procedures, items appearing on a credit report may not all correspond to a single individual); (ii) it is unlawful to provide credit reports of multiple people as “possible matches” (credit reporting companies are obligated to implement adequate procedures to find the correct individual); (iii) disclaimers about insufficient matching procedures will not cure a failure to take reasonable measures to ensure the information provided in a credit report is only about the individual for whom the user has a permissible purpose; and (iv) credit report users must ensure that they are not violating an individual’s privacy by obtaining a credit report when they lack a permissible purpose for doing so.
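    The name-only matching problem described in item (i) is easy to see in a small hypothetical sketch; the records and field names below are invented for illustration and are not drawn from the advisory opinion.

    ```python
    # Hypothetical illustration of name-only vs. multi-field matching. The records
    # and fields are invented; nothing here reflects any actual credit file.

    records = [
        {"name": "Jane Smith", "ssn_last4": "1234", "dob": "1980-05-01", "item": "Auto loan"},
        {"name": "Jane Smith", "ssn_last4": "9876", "dob": "1992-11-20", "item": "Collections"},
    ]

    def name_only_match(name):
        # Returns records for *both* Jane Smiths, so a report assembled this way
        # mixes information belonging to two different consumers.
        return [r for r in records if r["name"] == name]

    def multi_field_match(name, ssn_last4, dob):
        # Additional identifiers isolate the single consumer for whom the user
        # actually has a permissible purpose.
        return [r for r in records
                if (r["name"], r["ssn_last4"], r["dob"]) == (name, ssn_last4, dob)]

    print(len(name_only_match("Jane Smith")))                          # 2: a mixed file
    print(len(multi_field_match("Jane Smith", "1234", "1980-05-01")))  # 1: the right consumer
    ```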

    The Bureau also outlined certain criminal liability provisions in the FCRA. According to the advisory opinion, covered entities may face criminal liability under Section 619 for obtaining information on an individual under false pretenses, while Section 620 “imposes criminal liability on any officer or employee of a consumer reporting agency who knowingly and willfully provides information concerning an individual from the agency’s files to an unauthorized person.” Violators can face criminal penalties and imprisonment, the Bureau said in its announcement.

    As previously covered by InfoBytes, the Bureau finalized its Advisory Opinions Policy in 2020. Under the policy, entities seeking to comply with existing regulatory requirements are permitted to request an advisory opinion in the form of an interpretive rule from the Bureau (published in the Federal Register for increased transparency) to address areas of uncertainty.

    Agency Rule-Making & Guidance | Federal Issues | CFPB | Advisory Opinion | FCRA | Consumer Reporting Agency | Consumer Finance | Privacy/Cyber Risk & Data Security | Consumer Protection | Consumer Reporting

  • District Court approves contact tracing suit settlement

    Courts

    On October 31, the U.S. District Court for the Northern District of California granted plaintiffs’ motion for attorneys’ fees, expenses, and service awards related to the settlement of a class action alleging that an internet platform (defendant) violated the California Confidentiality of Medical Information Act, as well as other state laws, through its “contact tracing” system that operated on consumers’ mobile devices. According to the motion, the defendant co-designed a digital contact tracing system, intended to combat the spread of COVID-19, that ran on mobile devices using the defendant’s mobile operating system. The plaintiffs alleged that the defendant unlawfully exposed confidential medical information and personally identifying information through this system. Furthermore, the plaintiffs alleged that the defendant’s system was “fundamentally flawed in its design and implementation” because it left users’ private medical and personally identifying information unprotected in mobile device “system logs” to which the defendant and third parties had routine access. Under the terms of the settlement, class counsel will receive approximately $1.95 million in attorneys’ fees and $56,457.44 in expenses. Additionally, the defendant must pay service awards to class representatives.
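    The design flaw alleged, writing sensitive values into shared system logs, is commonly mitigated by redacting those values before log records are written. The sketch below is purely hypothetical (the identifiers are invented and nothing here reflects the defendant’s actual system) and uses Python’s standard logging module.

    ```python
    # Hypothetical sketch of log redaction; the identifiers are invented and do
    # not reflect the defendant's system. Sensitive values are masked before the
    # record reaches any shared system log that other processes can read.
    import logging
    import re

    class RedactingFilter(logging.Filter):
        """Mask values attached to sensitive-looking keys before they are logged."""
        PATTERN = re.compile(r"(rolling_id|device_id)=\S+")

        def filter(self, record: logging.LogRecord) -> bool:
            record.msg = self.PATTERN.sub(r"\1=[REDACTED]", str(record.msg))
            return True  # keep the record, just with masked values

    logger = logging.getLogger("exposure")
    handler = logging.StreamHandler()
    handler.addFilter(RedactingFilter())
    logger.addHandler(handler)

    logger.warning("match event rolling_id=a1b2c3d4 device_id=XYZ987")
    # Logged output: match event rolling_id=[REDACTED] device_id=[REDACTED]
    ```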

    Courts | Privacy/Cyber Risk & Data Security | Covid-19 | Class Action | Settlement

  • District Court preliminarily approves $3.7 million data breach settlement

    Privacy, Cyber Risk & Data Security

    On June 30, the U.S. District Court for the Central District of California preliminarily approved an approximately $3.7 million consolidated class action settlement resolving claims arising from a defendant restaurant chain’s 2021 data breach. According to class members’ memorandum in support of their motion for preliminary approval of the settlement, the data breach exposed current and former employees’ personal identifying information (PII), including names and Social Security numbers. Following an investigation, the defendant sent notices to 103,767 individuals whose PII may have been subject to unauthorized access and offered impacted individuals one year of free credit and identity monitoring services. Putative class actions were filed claiming the defendant failed to adequately safeguard its current and former employees’ (and their family members’) electronically stored PII, and alleging, among other things, violations of California’s Unfair Competition Law, Customer Records Act, and Consumer Privacy Act. If the settlement is granted final approval, each class member will be eligible to make a claim for up to $1,000 in reimbursements for expenses and lost time, and up to $5,000 in reimbursements for extraordinary expenses for identity theft related to the data breach. California settlement subclass members will also be entitled to $100 as a statutory damages award. Additionally, all class members will be eligible to enroll in two years of three-bureau credit monitoring. The defendant may also be responsible for attorneys’ fees, costs, and service awards.

    Privacy/Cyber Risk & Data Security | Courts | State Issues | Class Action | Data Breach | California | Settlement

  • New York fines supermarket chain $400,000 for mishandled consumer data

    Privacy, Cyber Risk & Data Security

    On June 30, the New York attorney general announced a settlement with a New York-based supermarket chain (respondent) for allegedly leaving more than three million customers’ personal information in unsecured, misconfigured cloud storage containers, which made the data potentially easy to access. The compromised data included customer account usernames and passwords, as well as customer names, email addresses, mailing addresses, and additional data derived from drivers’ license numbers. According to the assurance of discontinuance, a security researcher informed the respondent in 2021 that one of the cloud storage containers had been misconfigured from its creation in January 2018 until April 2021, potentially exposing customers’ personal information. A second misconfigured container, identified in May 2021, had been publicly accessible since November 2018, the AG said, noting that the respondent “immediately reviewed its cloud environment and identified the container, which had a database backup file with over three million records of customer email addresses and account passwords.” The AG asserted that the respondent also “failed to inventory its cloud assets containing personal information, secure all user passwords, and regularly conduct security testing of its cloud assets.” Nor did the retailer maintain long-term logs of its cloud assets, making it difficult to investigate security incidents, the AG said.

    The terms of the settlement require the respondent to pay $400,000 in penalties to the state. The respondent has also agreed to (i) maintain a comprehensive information security program, including reporting security risks to the company's leadership; (ii) establish practices and policies to maintain an inventory of all cloud assets and to ensure all cloud assets containing personal information have appropriate measures to limit access; (iii) develop a penetration testing program and implement centralized logging and monitoring of cloud asset activity; (iv) establish appropriate password policies and procedures for customer accounts; (v) maintain a reasonable vulnerability disclosure program to enable third parties to disclose vulnerabilities; (vi) establish appropriate practices for customer account management and authentication; and (vii) update its data collection and retention practices to ensure it only collects customers’ personal information when there is a reasonable business purpose for the collection and permanently deletes all personal information collected before the agreement for which no reasonable purpose exists.
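    The inventory and access-control obligations in items (ii) and (iii) are the kind of checks that lend themselves to automation. Below is a minimal sketch that assumes AWS S3 as the storage service (the assurance of discontinuance does not name the provider): it inventories every bucket in an account and flags any bucket lacking a full public-access block.

    ```python
    # Minimal sketch assuming AWS S3 as the cloud storage service; the settlement
    # does not identify the actual provider. Lists all buckets and flags any that
    # lack a complete public-access block.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)[
                "PublicAccessBlockConfiguration"]
            # A bucket is fully locked down only if all four settings are enabled.
            if not all(config.values()):
                print(f"REVIEW: {name} has a partial public-access block: {config}")
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                print(f"ALERT: {name} has no public-access block configured")
            else:
                raise
    ```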

    Privacy/Cyber Risk & Data Security | State Issues | New York | Settlement | State Attorney General | Consumer Protection

  • U.S.-UK partnership exchanges views on crypto, digital assets

    Federal Issues

    On July 1, the U.S. Treasury Department issued a joint statement providing an overview of recent meetings of the U.S.-UK Financial Innovation Partnership (FIP), at which Regulatory and Commercial Pillar participants exchanged views “on topics of mutual interest in the U.S. and UK regarding crypto and digital asset ecosystems.” Participants also discussed options for deepening ties between U.S. and UK financial authorities on financial innovation. As previously covered by InfoBytes, the FIP was created in 2019 to expand bilateral financial services collaboration, study emerging fintech innovation trends, and share information and expertise on regulatory practices. The first meeting of the FIP took place in August 2020 (covered by InfoBytes here). Topics discussed at the most recent meetings included, among other things, crypto-asset regulation and market developments, such as stablecoins, the exploration of central bank digital currencies, and other developments in digital asset markets. Participants acknowledged “the continued importance of the ongoing partnership on global financial innovation as an integral component of U.S.-UK financial services cooperation.”

    Federal Issues | Digital Assets | Department of Treasury | Fintech | UK | Of Interest to Non-US Persons | Cryptocurrency | Privacy/Cyber Risk & Data Security | CBDC

  • Insurers consider biometric exclusions as privacy cases increase

    Privacy, Cyber Risk & Data Security

    According to sources, some insurers are considering adding biometric exclusions to their insurance policies as privacy lawsuits increase. An article on the recent evolution of biometric privacy lawsuits noted an apparent increase in class actions claiming violations of the Illinois Biometric Information Privacy Act (BIPA), as “more courts began ruling that individuals need not show actual injury to allege BIPA violations.” The article explained that insurance carriers now “argue that general liability policies, with their lower premiums and face values, don’t insure data privacy lawsuits and can’t support potentially huge BIPA class action awards and settlements.” This issue is poised to become increasingly important to carriers and policyholders as additional states seek to regulate biometric privacy. The article noted that in the first quarter of 2022, seven states (California, Kentucky, Maine, Maryland, Massachusetts, Missouri, and New York) introduced biometric laws generally based on Illinois’ BIPA. Texas and Washington also have biometric laws, but without a private right of action.

    Privacy/Cyber Risk & Data Security | Insurance | BIPA | State Issues | Courts | Biometric Data

  • District Court says Massachusetts law will apply in choice-of-law privacy dispute

    Privacy, Cyber Risk & Data Security

    On June 28, the U.S. District Court for the District of South Carolina ruled that it will apply Massachusetts law to negligence claims in a putative class action concerning a cloud-based services provider’s allegedly lax data-security practices. The plaintiffs claimed that the defendant’s “security program was inadequate and that the security risks associated with the Personal Information went unmitigated, allowing [] cybercriminals to gain access.” During discovery, the defendant (headquartered in South Carolina) stated that its U.S. data centers are located in Massachusetts, Texas, California, and New Jersey, and that the particular servers that housed the plaintiffs’ data (and were the initial entry point for the ransomware attack) are physically located in Massachusetts. While both parties stipulated to the application of South Carolina choice-of-law principles generally, the plaintiffs specifically requested that South Carolina law be applied to their common law claims of negligence, negligence per se, and invasion of privacy since it was the state where defendant executives made the cybersecurity-related decisions that allegedly allowed the data breach to occur. However, the defendant countered that the law of each state where a plaintiff resides should apply to that specific plaintiff’s common law tort claims because the “damages were felt in their respective home states.” Both parties presented an alternative argument that if the court found the primary choice-of-law theory to be unfounded, then Massachusetts law would be appropriate as “Massachusetts was the state where the last act necessary took place because that is where the data servers were housed.”

    In determining which state’s common-law principles apply, the court stated that even if some of the cybersecurity decisions were made in South Carolina, the personal information was stored on servers in Massachusetts. Moreover, the “alleged decisions made in South Carolina may have contributed to the breach, but they were not the last act necessary to establish the cause of action,” the court wrote, noting that in order for the defendant to be potentially liable, the data servers would need to be breached. The court further concluded that “South Carolina’s choice of law rules dictate that where an injury occurs, not where the result of the injury is felt or discovered is the proper standard to determine the last act necessary to complete the tort.” As such, the court stated that Massachusetts law will apply as that is where the data breach occurred.

    Privacy/Cyber Risk & Data Security | Courts | State Issues | Massachusetts | South Carolina | Class Action

  • NYDFS imposes $5 million fine against cruise line for cybersecurity violations

    Privacy, Cyber Risk & Data Security

    On June 24, NYDFS announced a consent order imposing a $5 million fine against a group of Florida-based cruise lines for alleged violations of the state’s Cybersecurity Regulation (23 NYCRR Part 500). According to a Department investigation, the companies were subject to four cybersecurity incidents between 2019 and 2021 (including two ransomware attacks). The companies determined that unauthorized parties gained access to employee email accounts and that, through a series of phishing emails, the parties were able to access email and attachments containing personal information belonging to the companies’ consumers and employees. NYDFS claimed that although the companies were aware of the first cybersecurity event in May 2019, they failed to notify the Department as required under 23 NYCRR Part 500 until April 2020. The investigation further showed that the companies allegedly failed to implement multi-factor authentication and did not provide adequate cybersecurity training for their personnel. Because the companies were licensed insurance producers in the state at the time of the cybersecurity incidents, NYDFS determined that, in addition to paying the penalty, they would be required to surrender their insurance producer licenses.
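    Multi-factor authentication, one of the controls required under 23 NYCRR Part 500 (or a reasonably equivalent approved alternative), commonly pairs a password with a time-based one-time password (TOTP). The sketch below shows only the TOTP mechanism using the pyotp library; it is illustrative and not drawn from the consent order.

    ```python
    # Illustrative TOTP second factor using the pyotp library; a sketch of the
    # mechanism only, not anything prescribed by the NYDFS consent order.
    import pyotp

    # Generated once per user at enrollment, stored server-side, and loaded into
    # the user's authenticator app (typically via a QR code).
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # At login, the user supplies the six-digit code currently shown in the app.
    submitted_code = totp.now()  # stand-in for real user input in this sketch

    # verify() checks the code against the current 30-second window, so a
    # password harvested by phishing is not sufficient on its own.
    print(totp.verify(submitted_code))  # True
    ```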

    The settlement follows a $1.25 million data breach settlement reached with 45 states and the District of Columbia on June 22 (covered by InfoBytes here).

    Privacy/Cyber Risk & Data Security | State Issues | NYDFS | State Regulators | Enforcement | Settlement | Data Breach | 23 NYCRR Part 500

  • OCC reports on key risks facing the federal banking system

    Bank Regulatory

    On June 23, the OCC released its Semiannual Risk Perspective for Spring 2022, which reports on key risks threatening the safety and soundness of national banks, federal savings associations, and federal branches and agencies. The OCC reported that as “banks continue to navigate the operational- and market-related impacts of the pandemic along with substantial government stimulus, current geopolitics have tightened financial conditions and increased downside risk to economic growth.” However, the OCC noted that banks’ financial conditions remain strong and that banks are well-positioned to “deal with the economic headwinds arising from geopolitical events, higher interest rates and increased inflation.”

    The OCC highlighted operational, compliance, interest rate, and credit risks as key risk themes in the report. Observations include: (i) operational risk, including evolving cyber risk, is elevated, with an observed increase in attacks on the financial services industry amid current geopolitical tensions; (ii) compliance risk remains heightened as banks navigate the current operational environment, regulatory changes, and policy initiatives; and (iii) credit risk remains moderate, with banks facing certain areas of weakness and potential longer-term implications resulting from the Covid-19 pandemic, inflation, and direct and indirect effects of the war in Ukraine. Staffing also presents risks, given “strong competition” for talent in the labor market.

    The report also discussed the importance of appropriate due diligence of new digital asset products and services. The OCC said that it “continues to engage on an interagency basis to analyze various crypto-asset use cases,” and is looking to “provide further clarity on legal permissibility, as well as safety and soundness and compliance considerations related to crypto-assets” in the banking industry. 

    The OCC further stated it “will continue to monitor the development of climate-related financial risk management frameworks at large banks,” and reported that “OCC large-bank examination teams will integrate the examination of climate-related financial risk into supervision strategies and continue to engage with bank management to better understand the challenges banks face in this effort, including identifying and collecting appropriate data and developing scenario analysis capabilities and techniques.”

    Bank Regulatory | Federal Issues | OCC | Risk Management | Third-Party Risk Management | Compliance | Privacy/Cyber Risk & Data Security | Operational Risk | Climate-Related Financial Risks | Digital Assets | Nonbank
