On January 13, the U.S. District Court for the Northern District of Georgia issued a final order and judgment in a class action settlement between a class of consumers (plaintiffs) and a large credit reporting agency (company) to resolve allegations arising from a 2017 cyberattack that caused a data breach of the company's systems. After the company announced the breach, many consumers filed suit and were eventually joined into a proposed settlement class. As previously covered by InfoBytes, the plaintiffs alleged that the company (i) failed to provide appropriate security to protect stored personal consumer information; (ii) misled consumers regarding the effectiveness and capacity of its security; and (iii) failed to take proper action when vulnerabilities in its security system became known. The company and the plaintiffs later submitted a proposed settlement order to the court.
According to the final order and judgment, the court certified the settlement class of the approximately 147 million affected consumers, finding the class was adequately represented, and approved the “distribution and allocation plan” as fair and reasonable. In the order granting final approval of the settlement, the company agreed to, among other things, pay $380.5 million into a settlement fund and potentially up to $125 million more to cover “certain out-of-pocket losses,” $77.5 million for attorneys’ fees, and approximately $1.4 million for reimbursement of expenses. Class members are eligible for additional benefits, including up to 10 years of credit monitoring and identity theft protection services (or cash compensation if they already have those services), as well as identity restoration services for seven years. The company also agreed to spend at least $1 billion on data security and technology over the next five years.
On January 13, Washington state lawmakers announced two bills designed to strengthen consumer access and control over personal data and regulate the use of facial recognition technology. Highlights of SB 6281, the Washington Privacy Act, include the following:
- Applicability. SB 6281 will apply to legal entities that conduct business in Washington or produce products or services targeted to Washington consumers and that also (i) control or process personal data for at least 100,000 consumers; or (ii) derive more than 50 percent of gross revenue from the sale of personal data, in addition to processing or controlling the personal data of at least 25,000 consumers. Exempt from SB 6281 are, among others, state and local governments and municipal corporations, as well as certain protected health information, personal data governed by state and federal regulations, and employment records.
- Consumer rights. Consumers will be able to exercise the following concerning their personal data: access; correction; deletion; data portability; and opt-out rights, including the right to opt out of the processing of personal data for targeted advertising and the sale of personal data.
- Controller responsibilities. Controllers required to comply with SB 6281 will be responsible for (i) transparency; (ii) limiting the collection of data to what is required and relevant for a specified purpose; (iii) ensuring data is not processed for reasons incompatible with a specified purpose; (iv) securing personal data from unauthorized access; (v) prohibiting processing that violates state or federal laws prohibiting unlawful discrimination against consumers; (vi) obtaining consumer consent in order to process sensitive data; and (vii) ensuring contracts and agreements do not contain provisions that waive or limit a consumer’s rights. Controllers must also conduct data protection assessments for all processing activities that involve personal data, and conduct additional assessments each time a processing change occurs that “materially increases the risk to consumers.”
- State attorney general. SB 6281 does not create a private right of action for individuals to sue over an alleged violation. However, the AG will be permitted to bring actions and impose penalties of no more than $7,500 per violation. The AG will also be required to submit a report evaluating the liability and enforcement provisions of SB 6281 by 2022, along with any recommendations for change.
- Information sharing. SB 6281 will allow the state governor to enter into agreements with British Columbia, California, and Oregon, which will allow personal data to be shared for joint research initiatives.
- Facial recognition. SB 6281 will establish limits on the commercial use of facial recognition services. Among other things, the bill will require third-party testing of all services for accuracy and unfair performance differences prior to deployment, conspicuous notice when a service is deployed in a public space, and consumer consent prior to enrolling an image in a service used in a public space.
The second bill, SB 6280, will more specifically govern the use of facial recognition services by state and local government agencies, and, among other things, outlines provisions for the use of facial recognition services when identifying victims of crime, stipulates restrictions concerning ongoing surveillance, and requires agencies to produce an annual report containing a compliance assessment.
As previously covered by InfoBytes, last year, New York introduced proposed legislation (see S 5642) that seeks to regulate the storage, use, disclosure, and sale of consumer personal data by entities that conduct business in New York state or produce products or services that are intentionally targeted to residents of New York state. Provisions included in the measures introduced by New York and Washington state differ from those contained in the California Consumer Privacy Act (CCPA), which took effect January 1. (Previous InfoBytes coverage on the CCPA is available here.)
Mortgage broker allegedly violated federal laws by posting customers’ personal information on website
On January 7, the FTC announced a proposed settlement with a California mortgage broker and his company to resolve alleged violations of the FTC Act, FCRA, Regulation P, and the Safeguards Rule. According to a complaint filed by the DOJ on behalf of the FTC, the defendants published the personal information of customers who posted negative reviews on a public website, including customers’ “sources of income, debt-to-income ratios, credit history, taxes, family relationships, and health.” The alleged posts containing negative financial information violated the defendants’ responsibilities under Regulation P (Privacy of Consumer Financial Information), as the required privacy disclosure provided to the customers stated that the defendants would not share personal information with any third party. Regulation P also “prohibits financial institutions from disclosing to any nonaffiliated third party any nonpublic personal information about a customer unless it has provided the customer with an opt-out notice, . . . a reasonable opportunity to opt out of the disclosure, and the customer has not opted out.” In this instance, customers were not given the opportunity to opt out of disclosure of their personal financial information in response to online consumer reviews, the complaint asserts. In addition, the complaint alleges that the defendants violated the FTC Act by engaging in unfair or deceptive acts or practices that “deprived consumers of the ability to control whether and to whom they disclosed sensitive information.” The defendants also allegedly violated the FCRA by using consumer reports for impermissible purposes, and the FTC’s Safeguards Rule by failing to implement or maintain an adequate information security program.
Under the terms of the proposed settlement, the defendants will pay a $120,000 civil penalty and are prohibited from (i) misrepresenting their privacy and data security practices; (ii) using consumer reports for anything other than a permissible purpose; (iii) failing to provide required privacy notices; and (iv) improperly disclosing nonpublic personal information to third parties. Among other things, the company is also prohibited from transferring, selling, sharing, collecting, maintaining, or storing nonpublic personal information unless it implements a comprehensive information security program, and must obtain independent third-party assessments of its information security program every two years.
On December 9, a coalition of 25 state attorneys general responded to the FTC’s request for comments on a wide range of issues related to the Children’s Online Privacy Protection Rule (COPPA). As previously covered by InfoBytes, the FTC released a notice in July seeking comments on all major provisions of COPPA, including definitions, notice and parental consent requirements, exceptions to verifiable parental consent, and the safe harbor provision. In response, the AGs strongly recommend that, while the FTC should “significantly” strengthen COPPA, any changes must be flexible and evolve to meet the needs of a rapidly changing data landscape. Specifically, the AGs state that COPPA’s definitions of “web site or online service directed to children” and of an “operator” need to be modified, as many first-party platforms embed third parties who allegedly engage in the majority of privacy-invasive online tracking. By expanding the definition of an operator, the AGs claim, COPPA would require compliance by companies that use and profit from the data as well as companies that collect the data. According to the AGs, COPPA places a lower burden on third parties, requiring them to be bound by the rule only when they have “actual knowledge” that they are tracking children, even though these entities “are arguably as well-positioned as the operators of the websites and online services to know that they are tracking and monitoring children.”
The AGs also believe that the prong that “recognizes the child-directed nature of the content” should be strengthened, because companies that are able to identify and target consumers through sophisticated algorithms are often disincentivized to use the information to affirmatively identify child-directed websites or other online services. Among other things, the AGs also discuss the need for specifying the appropriate methods used for determining a user’s age, expanding COPPA to protect minors’ biometric data, and providing illustrative security requirements.
On November 19, Neustar released a report showing a 241 percent increase in Distributed Denial of Service (DDoS) attacks in 3Q 2019 versus 3Q 2018. Notably, several new methods of DDoS attack have also emerged, including:
- Reflection/amplification attacks, which use IP spoofing techniques to return large amounts of information in response to a small request;
- Exploitation of Apple Remote Management technology; and
- Exploitation of Web Services Dynamic Discovery (WS-DD), a protocol increasingly used by IoT devices, including security devices and cameras.
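Reflection/amplification attacks are often characterized by their bandwidth amplification factor (BAF): the ratio of bytes reflected at the victim to bytes the attacker spends on spoofed requests. A minimal sketch of that arithmetic, with illustrative (not measured) request/response sizes:

```python
# Sketch: estimating the bandwidth amplification factor (BAF) of a
# reflection attack -- the ratio of bytes delivered to the victim to
# bytes the attacker sends in spoofed requests. The per-protocol byte
# counts below are illustrative assumptions, not measured values.

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Return the bandwidth amplification factor for one request/response pair."""
    return response_bytes / request_bytes

# Hypothetical request/response sizes for a few commonly abused protocols.
protocols = {
    "dns_any": (60, 3000),
    "ntp_monlist": (234, 48000),
    "ws_discovery": (18, 3000),
}

for name, (req, resp) in protocols.items():
    print(f"{name}: ~{amplification_factor(req, resp):.0f}x amplification")
```

For defenders, the practical use of this ratio is prioritization: protocols with the highest amplification factors are the first candidates for rate-limiting or blocking at the network edge.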
Although the financial sector is not necessarily the prime target for non-state-actor DDoS attacks, it remains particularly susceptible as critical infrastructure in the context of state-supported or state-sponsored cyberattacks, which generally involve advanced persistent threats (APTs) and more sophisticated attack methods.
Why is this important? The NYDFS Cybersecurity Regulations (Regulations) and the FTC’s proposed Safeguards Rule (Rules), previously covered by InfoBytes here, have imposed (or may in the future impose) technical cybersecurity standards (in addition to blanket statements about “reasonable security measures”) for covered entities, such as multi-factor authentication, encryption, and annual penetration testing, among other things. Although the Rules and the Regulations are not the first regulations to impose technical standards (see, for example, Massachusetts’ standards for the protection of personal information under 201 Mass. Code Regs. 17.01 et seq.), they are the first to embed the CIA Triad as a core cybersecurity principle into the definitions of “Cybersecurity Event” and “Security Event,” respectively. The CIA Triad represents the core objectives of cybersecurity: confidentiality, integrity, and availability.
Implications for Financial Institutions. Geopolitical developments can often give rise to an increase in cyberattacks designed to disrupt, degrade, deny, or destroy information systems without stealing a single byte of information. Institutions that have built their information security plan solely around “security” and “confidentiality” principles may want to consider reviewing and updating risk assessments, plans, and procedures, and, if applicable, expand them to include availability threats, especially with respect to incident response operations and plans (as well as disaster recovery operations), as may be required under the proposed Rules.
For NYDFS purposes, cybersecurity events are reportable within 72 hours, so a significant DDoS attack could represent a reportable event and trigger follow-up, even if no PII was lost.
On November 22, the New York Senate’s Committee on Consumer Protection and Committee on Internet and Technology held a joint hearing titled, “Consumer Data and Privacy on Online Platforms,” which discussed the proposed New York Privacy Act, SB S5642 (the Act). The Act was introduced in May and seeks to regulate the storage, use, disclosure, and sale of consumer personal data by entities that conduct business in New York State or produce products or services that are intentionally targeted to residents of New York State. The Act contains different provisions than the California Consumer Privacy Act (CCPA), which is set to take effect on January 1, 2020 (visit here for InfoBytes coverage on the CCPA). Highlights of the Act include:
- Fiduciary Duty. Most notably, the Act requires that legal entities “shall act in the best interests of the consumer, without regard to the interests of the entity, controller or data broker, in a manner expected by a reasonable consumer under the circumstances.” Specifically, the Act states that personal data of consumers “shall not be used, processed or transferred to a third party, unless the consumer provides express and documented consent.” The Act imposes a duty of care on every legal entity, or affiliate of a legal entity, with respect to securing consumer personal data against privacy risk and requires prompt disclosure of any unauthorized access. Moreover, the Act requires that legal entities enter into a contract with third parties imposing the same duty of care for consumer personal data prior to disclosing, selling, or sharing the data with that party.
- Consumer Rights. The Act requires covered entities to provide consumers notice of their rights under the Act and provide consumers with the opportunity to opt-in or opt-out of the “processing of their personal data” using a method where the consumer must clearly select and indicate their consent or denial. Upon request, and without undue delay, covered entities are required to correct inaccurate personal data or delete personal data.
- Transparency. The Act requires covered entities to make a “clear, meaningful privacy notice” that is “in a form that is reasonably accessible to consumers,” which should include: the categories of personal data to be collected; the purpose for which the data is used and disclosed to third parties; the rights of the consumer under the Act; the categories of data shared with third parties; and the names of third parties with whom the entity shares data. If the entity sells personal data or processes data for direct marketing purposes, it must disclose the processing, as well as the manner in which a consumer may object to the processing.
- Enforcement. The Act defines violations as an unfair or deceptive act in trade or commerce, as well as an unfair method of competition. The Act allows the attorney general to bring an action for violations and also provides a private right of action for any harmed individual. Covered entities are subject to injunction and liable for damages and civil penalties.
According to reports, state lawmakers at the November hearing indicated that federal requirements would be “the best scenario,” but in the absence of Congressional movement in the area, one state senator noted that the state legislators must “assure [their] constituents that [the state legislature is] doing everything possible to protect their privacy.” Witnesses expressed concern that the Act would be placing too many new requirements on businesses that differ from what other states have already enacted, and encouraged more consistent baseline standards for compliance instead of a patchwork approach. Some witnesses expressed specific concern with the opt-in requirement for the collection and use of consumer data, noting that waiting on consumers to opt-in, as opposed to just opting-out, makes compliance difficult to administer. Lastly, many witnesses were displeased about the broad private right of action in the Act, but consumer groups praised the provision, noting that the state attorney general does not have the resources to regulate and enforce against all the data collection and sharing in the state.
The FTC Safeguards Rule, FFIEC Cybersecurity and IT Guidance, and other OCC guidelines (here and here) emphasize the need for cyber threat intelligence (CTI) and threat identification to inform an organization’s overall cyber risk identification, assessment, and mitigation program. Indeed, to successfully implement a risk-based information security program, an organization must be aware of general cybersecurity risks across all industries, as well as sector-specific risks and risks unique to the organization. Furthermore, proposed revisions to the FTC Safeguards Rule (previously covered by InfoBytes here) emphasize the need for a “thorough and complete risk assessment” that is informed by “possible vectors through which the security, confidentiality, and integrity of that information could be threatened.”
Threat modeling is generally understood as a formal process by which an organization identifies specific cyber threats to its information systems and sensitive information, giving management insight into the defenses needed; the critical risk areas within and across an information system, network, or business process; and the best allocation of scarce resources to address those risks. A generally accepted threat modeling process involves comprehensive system, application, and network mapping and data flow diagrams. Many threat modeling tools are available free to the public, such as Microsoft’s Threat Modeling Tool, which provides diagramming and analytical resources for network and data flow diagrams, using the STRIDE model (spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege) to inform the user of general cyber-attack vectors that each organization should consider. Generally, between cybersecurity frameworks such as the NIST Cybersecurity Framework (for risk-based analytical approaches) and threat modeling tools identifying generic cyber threats such as STRIDE (for general or sector-specific cyber risks), an organization can achieve a risk-informed information security program.
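The STRIDE enumeration step can be sketched in a few lines. The snippet below applies a common simplification, mapping each data-flow-diagram element type to the STRIDE categories typically considered for it; the element types, category mappings, and the sample system inventory are illustrative assumptions, not the output of Microsoft’s tool:

```python
# Sketch of a STRIDE-style threat enumeration: for each element in a
# simple system inventory, list the STRIDE categories that apply.
# Mappings and inventory are illustrative assumptions only.

STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

# A common simplification: which STRIDE categories are typically
# considered for each data-flow-diagram element type.
APPLICABLE = {
    "external_entity": "SR",
    "process": "STRIDE",
    "data_store": "TRID",
    "data_flow": "TID",
}

def enumerate_threats(elements):
    """Yield (element name, threat description) pairs to seed a risk register."""
    for name, kind in elements:
        for code in APPLICABLE[kind]:
            yield name, STRIDE[code]

system = [("customer", "external_entity"),
          ("loan_app_service", "process"),
          ("pii_database", "data_store"),
          ("app_to_db", "data_flow")]

for element, threat in enumerate_threats(system):
    print(f"{element}: {threat}")
```

The value of this mechanical pass is completeness: every element gets every applicable threat category considered, producing a candidate list that analysts then prune and prioritize.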
However, with the increasing number of large-scale data breaches and the evolving complexity of cybersecurity threats, many regulatory agencies and industry standards bodies have called for organizations to go one step further and use CTI to understand the tactics, techniques, and procedures (TTPs) employed by attackers. By using CTI and other threat-based models, organizations can gain insight into potential attack vectors through red-teaming and penetration testing, simulating each phase of a hypothetical attack on the organization’s information system and determining potential countermeasures that can be employed at each step of the kill chain. For instance, Lockheed Martin’s kill chain model involves seven steps (reconnaissance, weaponization, delivery, exploitation, installation, command and control, and actions on objectives) and proposes six potential defensive measures at each step (detect, deny, disrupt, degrade, deceive, and contain). Consequently, an organization can layer its defenses along each step in the kill chain to increase the probability of detecting or preventing an attack. The kill chain model was used as part of a U.S. Senate investigation into the data breach of a major corporation in 2013, identifying several stages along the chain where the attack could have been prevented or detected.
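In practice, the kill chain is often operationalized as a "courses of action" matrix: phases down one axis, defensive measures across the other, with each cell recording whether a deployed control covers it. A minimal sketch, using a hypothetical control inventory:

```python
# Sketch of a kill-chain courses-of-action matrix: for each of the
# seven kill chain phases, record which defensive actions are backed
# by a deployed control, then report the uncovered cells.
# The control inventory below is hypothetical.

PHASES = ["reconnaissance", "weaponization", "delivery", "exploitation",
          "installation", "command_and_control", "actions_on_objectives"]
ACTIONS = ["detect", "deny", "disrupt", "degrade", "deceive", "contain"]

# Hypothetical mapping of deployed controls to (phase, action) cells.
controls = {
    ("delivery", "deny"): "email attachment filtering",
    ("exploitation", "detect"): "host intrusion detection",
    ("command_and_control", "disrupt"): "DNS sinkholing",
}

def coverage_gaps(controls):
    """Return the (phase, action) cells with no deployed control."""
    return [(p, a) for p in PHASES for a in ACTIONS if (p, a) not in controls]

gaps = coverage_gaps(controls)
print(f"{len(gaps)} of {len(PHASES) * len(ACTIONS)} cells lack a control")
```

Listing the empty cells makes the layered-defense point concrete: management can see at a glance which kill chain phases have no detection or denial capability and direct resources there first.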
This threat identification process requires greater detail on adversarial TTPs. Fortunately, MITRE has made its ATT&CK (adversarial tactics, techniques, and common knowledge) platform publicly available. ATT&CK collects and streamlines adversarial TTPs in specific detail and provides information on each technique and potential mitigating procedures, including commonly used attack patterns for each. For instance, one technique identified by ATT&CK is encrypting data being exfiltrated to avoid detection by data loss prevention (DLP) tools or other network anomaly detection tools; ATT&CK identifies more than forty known techniques and tools that have been used to achieve encrypted transmission. ATT&CK also identifies potential detection and mitigation options, such as scanning unencrypted channels for encrypted files using DLP or intrusion detection software. Thus, instead of a generic data breach risk analysis, organizations can understand the specific TTPs that may make data breach detection and analysis more difficult, and possibly take measures to prevent their use.
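One way DLP tools approximate the "scanning for encrypted files" detection described above is a byte-entropy heuristic: encrypted or compressed data has a near-uniform byte distribution, so Shannon entropy approaching 8 bits per byte flags a payload as likely encrypted. The sketch below is a simplification under stated assumptions (the 7.5-bit threshold is illustrative, and compressed archives will also trip it):

```python
# Sketch of an entropy-based check for likely-encrypted payloads,
# a common DLP heuristic. Threshold is an illustrative assumption.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Empirical Shannon entropy in bits per byte (0.0 for empty input)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Flag payloads whose byte distribution is near-uniform, as encrypted
    or compressed data tends to be; plaintext rarely exceeds ~6 bits/byte."""
    return shannon_entropy(data) >= threshold

# Plaintext vs. random bytes (a stand-in for ciphertext):
print(looks_encrypted(b"quarterly loan portfolio report " * 200))
print(looks_encrypted(os.urandom(16384)))
```

A heuristic like this is only one layer: it cannot distinguish ciphertext from legitimate compressed files, which is why ATT&CK pairs such detections with mitigations like restricting which channels may carry encrypted traffic at all.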
By leveraging open-source CTI from tools such as ATT&CK and reports from third-party sources such as government and industry alerts, organizations can begin the process of designing proactive defenses against cyber threats. It is important to note, however, that ATT&CK can only inform an organization’s threat modeling and is not a threat model itself; additionally, ATT&CK focuses on penetration and hacking TTPs and therefore does not examine other threats that organizations may face, including distributed denial of service (DDoS) attacks that threaten the availability of their systems. Such threats will still need to be accounted for in any financial organization’s risk assessment, particularly if such attacks prevent its clients from accessing their financial accounts and, ultimately, their money.
On October 30, the U.K. Information Commissioner’s Office (ICO) announced an agreement reached between the ICO and a social media company that resolves an investigation into the company’s alleged misuse of personal data. The company has agreed to withdraw its appeal of the £500,000 penalty issued last year under section 55A of the Data Protection Act 1998 (DPA) and settle the case without an admission of guilt. The investigation stems from a data incident affecting upwards of 87 million users worldwide that included the processing of personal data about U.K. users in the context of a U.K. establishment. According to the ICO, the company violated principles of the DPA by (i) unfairly processing personal data; and (ii) failing “to take appropriate technical and organi[z]ational measures against unauthori[z]ed or unlawful processing of personal data.” The ICO published a statement by the company’s associate general counsel in which he noted that the company has “made major changes” to its platform that significantly restricts the information accessible to app developers, and that “[p]rotecting people’s information and privacy is a top priority for [the company].”
On October 21, the National Institute of Standards and Technology (NIST) released the second revision of its Big Data Interoperability Framework (NBDIF), which aims to “develop consensus on important, fundamental concepts related to Big Data” with the understanding that Big Data systems have the potential to “overwhelm traditional technical approaches,” including traditional approaches to privacy and data security. Modest updates were made to Volume 4 of the NBDIF, which focuses on privacy and data security, including a recommended layered approach to Big Data system transparency. With respect to transparency, Volume 4 introduces three levels, starting from level 1, which involves a System Communicator that “provides online explanations to users or stakeholders” discussing how information is processed and retained in a Big Data system, as well as records of “what has been disclosed, accepted, or rejected.” At the most mature levels, transparency includes developing digital ontologies (a multi-level architecture for digital data management) across domain-specific Big Data systems to enable adaptable privacy and security configurations based on user characteristics and populations. Largely intact, however, are the Big Data Safety Levels in Appendix A, which are voluntary (standalone) standards regarding best practices for privacy and data security in Big Data systems and include application security, business continuity, and transparency aspects.
Buckley Special Alert
Last week, the California attorney general released the highly anticipated proposed regulations implementing the California Consumer Privacy Act (CCPA). The CCPA, which was enacted in June 2018 (covered by a Buckley Special Alert), has been amended several times, with the most recent amendments signed into law on Oct. 11, and is currently set to take effect on Jan. 1, 2020. The law directed the California attorney general to issue regulations to further its purpose.
* * *
If you have any questions about the CCPA or other related issues, please visit our Privacy, Cyber Risk & Data Security practice page, or contact a Buckley attorney with whom you have worked in the past.
- Andrew W. Schilling to moderate "Expectations of in-house counsel from their law firm partners" at the ACI's 7th Annual Advanced Forum on False Claims and Qui Tam
- Buckley Webcast: Tips for navigating changes to the FHA recertification process
- Daniel P. Stipano to discuss "A 20/20 view on 2020’s legislative and regulatory outlook" at the ACAMS Anti-Financial Crime and Public Policy Conference
- Kari K. Hall and Michelle L. Rogers to discuss "Overdrafts and regulatory trends" at the CLE Alabama Banking Law Update
- Kathryn L. Ryan to discuss "Industry open forum session on NMLS usage" at the NMLS Annual Conference & Training
- Kathryn L. Ryan to discuss "Regulating innovative consumer lending products" at the NMLS Annual Conference & Training
- Daniel P. Stipano to moderate "Washington update" at the 17th Puerto Rican Symposium of Anti Money Laundering 2020 conference
- Checkpoint Webcast: CFL overview
- Daniel P. Stipano to discuss "Pathway of the SARs: Tracking trajectories of suspicious activity reports from alerts to prosecution" at the ACAMS moneylaundering.com 25th Annual International AML & Financial Crime Conference
- Daniel P. Stipano to discuss "Which bud’s for you? A deep-dive into evolving marijuana laws" at the ACAMS moneylaundering.com 25th Annual International AML & Financial Crime Conference