On December 9, a coalition of 25 state attorneys general responded to the FTC’s request for comments on a wide range of issues related to the Children’s Online Privacy Protection Rule (COPPA). As previously covered by InfoBytes, the FTC released a notice in July seeking comments on all major provisions of COPPA, including definitions, notice and parental consent requirements, exceptions to verifiable parental consent, and the safe harbor provision. In response, the AGs strongly recommend that the FTC “significantly” strengthen COPPA while keeping any changes flexible enough to evolve with a rapidly changing data landscape. Specifically, the AGs state that COPPA’s definitions of “web site or online service directed to children” and of an “operator” need to be modified, because many first-party platforms embed third parties that allegedly engage in the majority of privacy-invasive online tracking. By expanding the definition of an operator, the AGs argue, COPPA would require compliance both by companies that use and profit from the data and by companies that collect it. According to the AGs, COPPA places a lower burden on third parties, binding them to the rule only when they have “actual knowledge” that they are tracking children, even though these entities “are arguably as well-positioned as the operators of the websites and online services to know that they are tracking and monitoring children.”
The AGs also believe that the prong that “recognizes the child-directed nature of the content” should be strengthened, because companies able to identify and target consumers through sophisticated algorithms are often disincentivized to use that information to affirmatively identify child-directed websites or other online services. The AGs also discuss, among other things, the need to specify appropriate methods for determining a user’s age, expand COPPA to protect minors’ biometric data, and provide illustrative security requirements.
On December 13, the U.S. District Court for the District of Maryland denied an international hospitality company’s motion to dismiss a data breach suit brought by the City of Chicago. According to the city’s complaint, the company violated the Illinois Consumer Fraud and Deceptive Business Practices Act by, among other things, allegedly failing to (i) “protect Chicago residents’ personal information”; (ii) implement and maintain reasonable security measures; (iii) disclose that it did not maintain reasonable security measures; and (iv) provide “prompt notice” of the breach to Chicago residents. According to the opinion, the city had established standing to sue the company because it adequately alleged injury to its municipal interests. Additionally, the court rejected the company’s assertion that the suit is unconstitutional under the Illinois Constitution, stating that the consumer protection ordinance the company was alleged to have violated “addresses a local problem, making it a legitimate exercise of the City’s home rule authority” under the state’s constitution. At the center of the city’s action is a statement the company released in November 2018, disclosing that the breach had been discovered in September 2018, had exposed personal information of 500 million guests, and had been ongoing since 2014.
On December 6, the FTC issued a unanimous opinion against a British consulting and data analytics firm, finding that the firm violated the FTC Act by engaging in “deceptive practices to harvest personal information from tens of millions of [a social media company’s] users.” The information—which was allegedly collected through an application that told users it would not harvest identifiable information—was then used to target potential voters. The opinion also found that the firm engaged in deceptive practices relating to its participation in the EU-U.S. Privacy Shield framework. The opinion follows an administrative complaint issued against the firm in July (previously covered by InfoBytes here). Under the terms of the administrative final order, the firm is prohibited from misrepresenting “the extent to which it protects the privacy and confidentiality of personal information as well as its participation in the EU-U.S. Privacy Shield framework and other similar regulatory or standard-setting organizations,” and it must apply Privacy Shield protections to personal information collected during its participation in the program or return or delete the information. Among other things, the firm also must delete or destroy the personal information collected from consumers through the app, as well as any other information or work product that originated from that information.
On December 4, the Senate Commerce Committee held a hearing titled “Examining Legislative Proposals to Protect Consumer Data Privacy” to discuss how to “provide consumers with more security, transparency, choice, and control over personal information both online and offline.” Among the issues discussed at the hearing was how consumer privacy rights should be enforced. As previously covered by InfoBytes, some FTC commissioners, at a hearing earlier this year, expressed that authorization to enforce federal privacy laws should vest not only in the FTC, but also in the states’ attorneys general. At the Senate hearing, there was testimony suggesting that the FTC is spread too thin to be in charge of enforcing new privacy laws. At least one witness championed state privacy regulation, while other witnesses endorsed preemption of the state laws by the envisioned federal privacy law. Although different views were expressed regarding what the law should look like, the hearing participants generally seemed to agree that a federal privacy law may be needed now in light of recent state legislative agendas and, as one Senator raised, the growing use of artificial intelligence.
On November 19, Neustar released a report showing a 241 percent increase in Distributed Denial of Service (DDoS) attacks in 3Q 2019 versus 3Q 2018. Notably, several new methods of DDoS attack have emerged, including:
- Reflection/amplification attacks, which use IP spoofing techniques to return large amounts of information in response to a small request;
- Exploitation of Apple Remote Management technology;
- Exploitation of Web Service Dynamic Discovery (WS-DD), which has been increasingly used by IoT devices, including security devices and cameras.
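The reflection/amplification mechanic in the first bullet can be sketched numerically. The amplification factors below are illustrative approximations drawn from widely cited industry reporting, not from the Neustar report itself; actual values vary by server configuration.

```python
# Illustrative amplification factors (response bytes per request byte) for
# common reflection/amplification vectors. These figures are approximate
# assumptions for demonstration only.
AMPLIFICATION_FACTORS = {
    "DNS": 28.0,          # a small query can return a much larger answer
    "NTP (monlist)": 556.9,
    "Memcached": 10000.0,
    "WS-DD": 15.3,        # Web Services Dynamic Discovery, abused via IoT devices
}

def reflected_volume(request_bytes: int, protocol: str) -> float:
    """Estimate the traffic a reflector sends to the spoofed victim address."""
    return request_bytes * AMPLIFICATION_FACTORS[protocol]
```

The attacker spoofs the victim's IP in a small request, and the reflector delivers the amplified response to the victim, which is why a modest botnet can generate very large attack volumes.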
Although the financial sector is not necessarily the prime target for non-state actor DDoS attacks, it remains particularly susceptible as critical infrastructure in the context of state-supported or state-sponsored cyberattacks, which generally involve advanced persistent threats (APTs) and more sophisticated attack methods.
Why is this important? The NYDFS Cybersecurity Regulations (Regulations) and the FTC proposed Safeguards Rule (Rules), previously covered by InfoBytes here, have imposed (or may impose in the future) technical cybersecurity standards (in addition to blanket statements about “reasonable security measures”) for covered entities, such as multi-factor authentication, encryption, and annual penetration testing, among other things. Although the Rules and the Regulations are not the first regulations to impose technical standards (for example, Massachusetts’ standards for the protection of personal information under 201 Mass. Code Regs. 17.01 et seq.), they are the first to embed the CIA Triad as a core cybersecurity principle into the definitions of “Cybersecurity Event” and “Security Event,” respectively. The CIA Triad represents the core objectives of cybersecurity: confidentiality, integrity, and availability.
Implications for Financial Institutions. Geopolitical developments can often give rise to an increase in cyberattacks designed to disrupt, degrade, deny, or destroy information systems without stealing a single byte of information. Institutions that have built their information security plan solely around “security” and “confidentiality” principles may want to consider reviewing and updating risk assessments, plans, and procedures, and, if applicable, expanding them to include availability threats, especially with respect to incident response operations and plans (as well as disaster recovery operations), as may be required under the proposed Rules.
Under the NYDFS Regulations, cybersecurity events must be reported within 72 hours, so a significant DDoS attack could constitute a reportable event requiring follow-up, even if no PII was lost.
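The practical effect of a CIA-based event definition can be sketched in a few lines: an incident qualifies if it affects confidentiality, integrity, *or* availability, so a pure availability attack such as a DDoS can trigger reporting even with no data loss. The field names below are illustrative assumptions, not regulatory text.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    """Toy incident record keyed to the three CIA objectives (illustrative)."""
    data_disclosed: bool       # confidentiality impact
    data_altered: bool         # integrity impact
    systems_unavailable: bool  # availability impact (e.g., a DDoS attack)

def is_cybersecurity_event(incident: Incident) -> bool:
    """Under a CIA-based definition, impact to any one objective suffices."""
    return (incident.data_disclosed
            or incident.data_altered
            or incident.systems_unavailable)
```

A confidentiality-only definition would miss the DDoS case entirely; embedding the full triad is what pulls availability incidents into scope.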
On December 4, the Financial Stability Oversight Council (FSOC) issued final interpretive guidance to revise and update 2012 guidance concerning nonbank financial company designations. According to Treasury Secretary Steven T. Mnuchin, the guidance “enhances [FSOC’s] ability to identify, assess, and respond to potential risks to U.S. financial stability. . . by promoting careful analysis and creating a more streamlined process.” Among other things, the guidance (i) implements an activities-based approach for identifying, assessing, and addressing potential risks and threats to financial stability in the U.S., allowing FSOC to work with federal and state financial regulators to implement appropriate actions when a potential risk is identified; (ii) enhances the analytic framework for potential nonbank financial company designations, which includes a cost-benefit analysis and a review of the likelihood of a company’s material financial distress determined by its vulnerability to a range of factors; and (iii) enhances the efficiency and effectiveness of the nonbank financial company designation process by condensing the process into two stages and increasing “engagement with and transparency to” companies under review, as well as their regulators, through the creation of pre- and post-designation off ramps.
FSOC also released its 2019 annual report to Congress, which reviews financial market developments, identifies emerging risks, and offers recommendations to enhance financial stability. Key highlights include:
- Cybersecurity. FSOC states that “[g]reater reliance on technology, particularly across a broader array of interconnected platforms, increases the risk that a cybersecurity event will have severe consequences for financial institutions.” Among other things, FSOC recommends continued robust, comprehensive cybersecurity monitoring, and supports the development of public and private partnerships to “increase coordination of cybersecurity examinations across regulatory authorities.”
- Nonbank Mortgage Origination and Servicing. The report adds the increasing share of mortgages held by nonbank mortgage companies to its list of concerns. FSOC notes that of the 25 largest originators and servicers, nonbanks originate roughly 51 percent of mortgages and service approximately 47 percent—a notable increase from 2009, when nonbanks originated only 10 percent of mortgages and serviced just 6 percent. FSOC states that risks in nonbank origination and servicing arise because, among other things, most nonbanks have limited liquidity as compared to banks and rely more on short-term funding. FSOC recommends that federal and state regulators continue to coordinate efforts to collect data, identify risks, and strengthen oversight of nonbanks in this space.
- Financial Innovation. The report discusses the benefits of new financial products and practices, but cautions that these may also create new risks and vulnerabilities. FSOC recommends that these products and services—particularly digital assets and distributed ledger technology—should be continually monitored and analyzed to understand their effects on consumers, regulated entities, and financial markets.
On November 22, the New York Senate’s Committee on Consumer Protection and Committee on Internet and Technology held a joint hearing titled, “Consumer Data and Privacy on Online Platforms,” which discussed the proposed New York Privacy Act, SB S5642 (the Act). The Act was introduced in May and seeks to regulate the storage, use, disclosure, and sale of consumer personal data by entities that conduct business in New York State or produce products or services that are intentionally targeted to residents of New York State. The Act contains different provisions than the California Consumer Privacy Act (CCPA), which is set to take effect on January 1, 2020 (visit here for InfoBytes coverage on the CCPA). Highlights of the Act include:
- Fiduciary Duty. Most notably, the Act requires that legal entities “shall act in the best interests of the consumer, without regard to the interests of the entity, controller or data broker, in a manner expected by a reasonable consumer under the circumstances.” Specifically, the Act states that personal data of consumers “shall not be used, processed or transferred to a third party, unless the consumer provides express and documented consent.” The Act imposes a duty of care on every legal entity, or affiliate of a legal entity, with respect to securing consumer personal data against privacy risk and requires prompt disclosure of any unauthorized access. Moreover, the Act requires that legal entities enter into a contract with third parties imposing the same duty of care for consumer personal data prior to disclosing, selling, or sharing the data with that party.
- Consumer Rights. The Act requires covered entities to provide consumers notice of their rights under the Act and provide consumers with the opportunity to opt-in or opt-out of the “processing of their personal data” using a method where the consumer must clearly select and indicate their consent or denial. Upon request, and without undue delay, covered entities are required to correct inaccurate personal data or delete personal data.
- Transparency. The Act requires covered entities to make a “clear, meaningful privacy notice” that is “in a form that is reasonably accessible to consumers,” which should include: the categories of personal data to be collected; the purpose for which the data is used and disclosed to third parties; the rights of the consumer under the Act; the categories of data shared with third parties; and the names of third parties with whom the entity shares data. If the entity sells personal data or processes data for direct marketing purposes, it must disclose the processing, as well as the manner in which a consumer may object to the processing.
- Enforcement. The Act defines violations as an unfair or deceptive act in trade or commerce, as well as an unfair method of competition. The Act allows the attorney general to bring an action for violations and also provides a private right of action for any harmed individual. Covered entities are subject to injunction and liable for damages and civil penalties.
According to reports, state lawmakers at the November hearing indicated that federal requirements would be “the best scenario,” but in the absence of Congressional movement in the area, one state senator noted that the state legislators must “assure [their] constituents that [the state legislature is] doing everything possible to protect their privacy.” Witnesses expressed concern that the Act would be placing too many new requirements on businesses that differ from what other states have already enacted, and encouraged more consistent baseline standards for compliance instead of a patchwork approach. Some witnesses expressed specific concern with the opt-in requirement for the collection and use of consumer data, noting that waiting on consumers to opt-in, as opposed to just opting-out, makes compliance difficult to administer. Lastly, many witnesses were displeased about the broad private right of action in the Act, but consumer groups praised the provision, noting that the state attorney general does not have the resources to regulate and enforce against all the data collection and sharing in the state.
The FTC Safeguards Rule, FFIEC Cybersecurity and IT Guidance, and OCC guidelines (here and here) emphasize the need for cyber threat intelligence (CTI) and threat identification to inform an organization’s overall cyber risk identification, assessment, and mitigation program. Indeed, to successfully implement a risk-based information security program, an organization must be aware of general cybersecurity risks across all industries, as well as risks specific to its business sector and unique to the organization itself. Furthermore, proposed revisions to the FTC Safeguards Rule (previously covered by InfoBytes here) emphasize the need for a “thorough and complete risk assessment” that is informed by “possible vectors through which the security, confidentiality, and integrity of that information could be threatened.”
Threat modeling is generally understood as a formal process by which an organization identifies specific cyber threats to its information systems and sensitive information. The process gives management insight into the defenses needed; the critical risk areas within and across an information system, network, or business process; and the best allocation of scarce resources to address critical risks. A generally accepted threat modeling process involves comprehensive system, application, and network mapping and data flow diagrams. Many threat modeling tools are freely available to the public, such as Microsoft’s Threat Modeling Tool, which provides diagramming and analytical resources for network and data flow diagrams and uses the STRIDE model (spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege) to inform the user of general cyber-attack vectors each organization should consider. Between cybersecurity frameworks such as the NIST Cybersecurity Framework (for risk-based analytical approaches) and threat modeling tools identifying generic cyber threats such as STRIDE (for general or sector-specific cyber risks), an organization can achieve a risk-informed information security program.
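The STRIDE-per-element approach described above can be sketched programmatically: each type of element in a data flow diagram is checked against the STRIDE categories that typically apply to it. The element types and category mappings below are common conventions in STRIDE-per-element analyses, offered here as illustrative assumptions rather than a reproduction of Microsoft's tool.

```python
# The six STRIDE threat categories, keyed by initial.
STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

# Which categories are conventionally considered for each data-flow-diagram
# element type (illustrative mapping; adjust per your own methodology).
APPLICABLE = {
    "external_entity": "SR",
    "process": "STRIDE",   # processes are exposed to all six categories
    "data_store": "TRID",
    "data_flow": "TID",
}

def enumerate_threats(element_type: str) -> list[str]:
    """Return the threat categories to consider for a diagram element."""
    return [STRIDE[c] for c in APPLICABLE[element_type]]
```

Walking every element of the diagram through such a mapping is what turns a network map into an enumerable threat list that can then be risk-ranked.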
However, with the increasing number of large-scale data breaches and the evolving complexity of cybersecurity threats, many regulatory agencies and industry standards bodies have called for organizations to go one step further and use CTI to understand the tactics, techniques, and procedures (TTPs) employed by attackers. By using CTI and other threat-based models, organizations can gain insight into potential attack vectors through red-teaming and penetration testing, simulating each phase of a hypothetical attack on the organization’s information system and determining potential countermeasures that can be employed at each step of the kill chain. For instance, Lockheed Martin’s formal kill chain model involves seven steps (reconnaissance, weaponization, delivery, exploitation, installation, command and control, and actions on objective) and proposes six potential defensive measures at each step (detect, deny, disrupt, degrade, deceive, and contain). An organization can thus layer its defenses along each step in the kill chain to increase the probability of detecting or preventing an attack. The kill chain model was used as part of a U.S. Senate investigation into the 2013 data breach of a major corporation, identifying several stages along the chain where the attack could have been prevented or detected.
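The layered-defense idea reduces to a seven-by-six "courses of action" matrix: one row per kill chain phase, one column per defensive measure, each cell holding the controls deployed at that intersection. Phase and action names follow Lockheed Martin's published model; the sample control entries are illustrative assumptions, not prescriptions.

```python
# Kill chain phases and defensive actions per Lockheed Martin's model.
KILL_CHAIN = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command_and_control", "actions_on_objective",
]
ACTIONS = ["detect", "deny", "disrupt", "degrade", "deceive", "contain"]

def courses_of_action_matrix() -> dict:
    """Build an empty phase-by-action matrix to be populated with controls."""
    return {phase: {action: [] for action in ACTIONS} for phase in KILL_CHAIN}

matrix = courses_of_action_matrix()
# Sample entries (hypothetical controls for illustration):
matrix["delivery"]["detect"].append("email gateway attachment scanning")
matrix["command_and_control"]["deny"].append("egress firewall rules")
```

Gaps in the filled-in matrix (phases with no detect or deny controls) show where a single defensive failure would let an attack proceed unchecked, which is precisely the analysis the Senate investigation performed retrospectively.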
This threat identification process requires greater detail on adversarial TTPs. Fortunately, MITRE has made its ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) platform publicly available. ATT&CK collects and organizes adversarial TTPs in specific detail and provides information on each technique, commonly used attack patterns, and potential mitigating procedures. For instance, one technique identified by ATT&CK is encrypting data being exfiltrated to avoid detection by data loss prevention (DLP) tools or other network anomaly detection tools; ATT&CK identifies more than forty known techniques and tools that have been used to achieve encrypted transmission. ATT&CK also identifies potential detection and mitigation options, such as scanning unencrypted channels for encrypted files using DLP or intrusion detection software. Thus, instead of performing a generic data breach risk analysis, organizations can understand the specific TTPs that may make data breach detection and analysis more difficult, and possibly take measures to prevent them.
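One common way the "scan unencrypted channels for encrypted files" detection idea is implemented is an entropy heuristic: encrypted or compressed data approaches the maximum of 8 bits of entropy per byte, so near-maximal entropy on a channel that should carry plaintext is suspicious. This is a minimal sketch of that one signal; the threshold is an illustrative assumption, and real DLP and intrusion detection tools combine many signals to limit false positives.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(payload: bytes, threshold: float = 7.5) -> bool:
    """Heuristic: near-maximal entropy suggests encrypted/compressed content.

    The 7.5 bits/byte threshold is a demonstration value, not a tuned one;
    legitimate compressed formats (ZIP, JPEG) will also exceed it.
    """
    return shannon_entropy(payload) > threshold
```

English text typically measures well under 5 bits per byte, so a flow of such payloads over a plaintext protocol would not trip this check, while ciphertext would.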
By leveraging open-source CTI from tools such as ATT&CK and reports from third-party sources such as government and industry alerts, organizations can begin designing proactive defenses against cyber threats. It is important to note, however, that ATT&CK can only inform an organization’s threat modeling and is not a threat model itself; additionally, ATT&CK focuses on penetration and hacking TTPs and therefore does not address other threats organizations may face, including distributed denial of service (DDoS) attacks that threaten the availability of their systems. Such threats still need to be accounted for in any financial organization’s risk assessment, particularly where such attacks prevent clients from accessing their financial accounts and, ultimately, their money.
On November 12, the FTC announced a proposed settlement requiring a technology service provider to implement a comprehensive data security program to resolve allegations of security failures that allowed a hacker to access the sensitive personal information of about one million consumers. According to the complaint, the FTC asserts that the service provider and its former CEO violated the FTC Act by engaging in unreasonable data security practices, including failing to (i) have a systematic process for inventorying and deleting consumers’ sensitive personal information that was no longer necessary to store on its network; (ii) adequately assess the cybersecurity risk posed to consumers’ personal information stored on its network by performing adequate code review of its software and penetration testing; (iii) detect malicious file uploads by implementing protections such as adequate input validation; (iv) adequately limit the locations to which third parties could upload unknown files on its network and segment the network to ensure that one client’s distributors could not access another client’s data on the network; and (v) implement safeguards to detect abnormal activity and/or cybersecurity events. The FTC further alleges in its complaint that the provider could have addressed each of the failures described above “by implementing readily available and relatively low-cost security measures.”
The FTC alleges more particularly that, between May 2014 and March 2016, an unauthorized intruder accessed the service provider’s server over 20 times, and in March 2016, “accessed personal information of approximately one million consumers, including: full names; physical addresses; email addresses; telephone numbers; SSNs; distributor user IDs and passwords; and admin IDs and passwords.” Because the information obtained can be used to commit identity theft and fraud, the FTC alleged that the service provider’s failure to implement reasonable security measures violated the FTC’s prohibition against unfair practices.
The proposed settlement requires the service provider to, among other things, create certain records and obtain third-party assessments of its information security program every two years for the 20 years following the issuance of the related order that would result from the settlement.
On October 30, the U.K. Information Commissioner’s Office (ICO) announced an agreement reached between the ICO and a social media company that resolves an investigation into the company’s alleged misuse of personal data. The company has agreed to withdraw its appeal of the £500,000 penalty issued last year under section 55A of the Data Protection Act 1998 (DPA) and settle the case without an admission of guilt. The investigation stems from a data incident affecting upwards of 87 million users worldwide that included the processing of personal data about U.K. users in the context of a U.K. establishment. According to the ICO, the company violated principles of the DPA by (i) unfairly processing personal data; and (ii) failing “to take appropriate technical and organi[z]ational measures against unauthori[z]ed or unlawful processing of personal data.” The ICO published a statement by the company’s associate general counsel in which he noted that the company has “made major changes” to its platform that significantly restricts the information accessible to app developers, and that “[p]rotecting people’s information and privacy is a top priority for [the company].”