On November 19, Neustar released a report showing a 241 percent increase in Distributed Denial of Service (DDoS) attacks in 3Q 2019 versus 3Q 2018. Notably, several new methods of DDoS attack have emerged, including:
- DDoS reflection/amplification attacks, which take advantage of IP spoofing techniques to return large amounts of information in response to a small request;
- Exploitation of Apple Remote Management technology; and
- Exploitation of Web Services Dynamic Discovery (WS-DD), which is increasingly used by IoT devices, including security devices and cameras.
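Reflection/amplification abuse is usually quantified by a bandwidth amplification factor: the size of the reflected response divided by the size of the small spoofed request. A minimal Python sketch of that calculation, using assumed round-number byte counts rather than measured values:

```python
# Illustrative only: the protocols and byte counts below are assumed
# round numbers for demonstration, not measured figures.
KNOWN_AMPLIFIERS = {
    # protocol: (request_bytes, typical_response_bytes)
    "DNS (ANY query)": (60, 3000),
    "NTP (monlist)": (8, 3500),
    "WS-DD (probe)": (170, 2600),
}

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification factor: response size over request size."""
    return response_bytes / request_bytes

for proto, (req, resp) in KNOWN_AMPLIFIERS.items():
    print(f"{proto}: ~{amplification_factor(req, resp):.0f}x amplification")
```

The spoofed source address means the victim, not the attacker, receives every amplified response, which is why small botnets can still generate large floods.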
Although the financial sector is not necessarily the prime target for non-state actor DDoS attacks, it remains particularly susceptible as critical infrastructure in the context of state-supported or state-sponsored cyberattacks, which generally employ advanced persistent threats (APTs) and more sophisticated attack methods.
Why is this important? The NYDFS Cybersecurity Regulations (Regulations) and the FTC proposed Safeguards Rule (Rules), previously covered by InfoBytes here, have imposed (or may impose in the future) technical cybersecurity standards (in addition to blanket statements about “reasonable security measures”) for covered entities, such as multi-factor authentication, encryption, and annual penetration testing, among other things. Although the Rules and the Regulations are not the first regulations to impose technical standards (for example, Massachusetts’ standards for the protection of personal information under 201 Mass. Code Regs. 17.01 et seq.), they are the first to embed the CIA Triad as a core cybersecurity principle in the definitions of “Cybersecurity Event” and “Security Event,” respectively. The CIA Triad represents the core objectives of cybersecurity: confidentiality, integrity, and availability.
Implications for Financial Institutions. Geopolitical developments can often give rise to an increase in cyberattacks designed to disrupt, degrade, deny, or destroy information systems without stealing a single byte of information. Institutions that have built their information security plan solely around “security” and “confidentiality” principles may want to consider reviewing and updating risk assessments, plans, and procedures, and, if applicable, expand them to include availability threats, especially with respect to incident response operations and plans (as well as disaster recovery operations), as may be required under the proposed Rules.
For NYDFS, cybersecurity events are reportable within 72 hours, so a significant DDoS attack could constitute a reportable event and trigger potential follow-up, even if no PII was lost.
On November 22, the New York Senate’s Committee on Consumer Protection and Committee on Internet and Technology held a joint hearing titled, “Consumer Data and Privacy on Online Platforms,” which discussed the proposed New York Privacy Act, SB S5642 (the Act). The Act was introduced in May and seeks to regulate the storage, use, disclosure, and sale of consumer personal data by entities that conduct business in New York State or produce products or services that are intentionally targeted to residents of New York State. The Act’s provisions differ from those of the California Consumer Privacy Act (CCPA), which is set to take effect on January 1, 2020 (visit here for InfoBytes coverage on the CCPA). Highlights of the Act include:
- Fiduciary Duty. Most notably, the Act requires that legal entities “shall act in the best interests of the consumer, without regard to the interests of the entity, controller or data broker, in a manner expected by a reasonable consumer under the circumstances.” Specifically, the Act states that personal data of consumers “shall not be used, processed or transferred to a third party, unless the consumer provides express and documented consent.” The Act imposes a duty of care on every legal entity, or affiliate of a legal entity, with respect to securing consumer personal data against privacy risk and requires prompt disclosure of any unauthorized access. Moreover, the Act requires that legal entities enter into a contract with third parties imposing the same duty of care for consumer personal data prior to disclosing, selling, or sharing the data with that party.
- Consumer Rights. The Act requires covered entities to provide consumers notice of their rights under the Act and provide consumers with the opportunity to opt-in or opt-out of the “processing of their personal data” using a method where the consumer must clearly select and indicate their consent or denial. Upon request, and without undue delay, covered entities are required to correct inaccurate personal data or delete personal data.
- Transparency. The Act requires covered entities to make a “clear, meaningful privacy notice” that is “in a form that is reasonably accessible to consumers,” which should include: the categories of personal data to be collected; the purpose for which the data is used and disclosed to third parties; the rights of the consumer under the Act; the categories of data shared with third parties; and the names of third parties with whom the entity shares data. If the entity sells personal data or processes data for direct marketing purposes, it must disclose the processing, as well as the manner in which a consumer may object to the processing.
- Enforcement. The Act defines violations as an unfair or deceptive act in trade or commerce, as well as an unfair method of competition. The Act allows the attorney general to bring an action for violations and also provides a private right of action for any harmed individual. Covered entities are subject to injunction and liable for damages and civil penalties.
According to reports, state lawmakers at the November hearing indicated that federal requirements would be “the best scenario,” but in the absence of Congressional movement in the area, one state senator noted that the state legislators must “assure [their] constituents that [the state legislature is] doing everything possible to protect their privacy.” Witnesses expressed concern that the Act would place too many new requirements on businesses that differ from what other states have already enacted, and encouraged more consistent baseline standards for compliance instead of a patchwork approach. Some witnesses expressed specific concern with the opt-in requirement for the collection and use of consumer data, noting that waiting on consumers to opt in, as opposed to just opting out, makes compliance difficult to administer. Lastly, many witnesses were displeased about the broad private right of action in the Act, but consumer groups praised the provision, noting that the state attorney general does not have the resources to regulate and enforce against all the data collection and sharing in the state.
The FTC Safeguards Rule, FFIEC Cybersecurity and IT Guidance, and other OCC guidelines (here and here) emphasize the need for cyber threat intelligence (CTI) and threat identification to inform an organization’s overall cyber risk identification, assessment, and mitigation program. Indeed, to successfully implement a risk-based information security program, an organization must be aware of general cybersecurity risks across all industries, as well as business-sector risks and risks unique to the organization. Furthermore, proposed revisions to the FTC Safeguards Rule (previously covered by InfoBytes here) emphasize the need for a “thorough and complete risk assessment” that is informed by “possible vectors through which the security, confidentiality, and integrity of that information could be threatened.”
Threat modeling is generally understood as a formal process by which an organization identifies specific cyber threats to its information systems and sensitive information, providing management with insight into the defenses needed; the critical risk areas within and across an information system, network, or business process; and the best allocation of scarce resources to address the critical risks. A generally accepted threat modeling process involves comprehensive system, application, and network mapping and data flow diagrams. Many threat modeling tools are available free to the public, such as Microsoft’s Threat Modeling Tool, which provides diagramming and analytical resources for network and data flow diagrams, utilizing the STRIDE model (spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege) to inform the user of general cyber-attack vectors that each organization should consider. Generally, between cybersecurity frameworks, such as the NIST Cybersecurity Framework (for risk-based analytical approaches), and threat modeling tools identifying generic cyber threats such as STRIDE (for general or sector-specific cyber risks), an organization can achieve a risk-informed information security program.
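In practice, STRIDE is often applied per element of a data flow diagram, with each element type facing a subset of the six threat categories. A minimal Python sketch of that enumeration, using a simplified (assumed) applicability mapping and hypothetical element names:

```python
from dataclasses import dataclass, field

# The six STRIDE threat categories.
STRIDE = (
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service", "Elevation of privilege",
)

@dataclass
class Element:
    name: str
    kind: str  # "process", "data_store", "data_flow", or "external_entity"
    threats: list = field(default_factory=list)

# Simplified applicability, loosely following the STRIDE-per-element idea;
# real tooling refines this (e.g., data stores face repudiation only when
# they hold audit logs).
APPLICABLE = {
    "process": set(STRIDE),  # processes face all six categories
    "data_store": {"Tampering", "Repudiation",
                   "Information disclosure", "Denial of service"},
    "data_flow": {"Tampering", "Information disclosure", "Denial of service"},
    "external_entity": {"Spoofing", "Repudiation"},
}

def enumerate_threats(elements):
    """Attach the applicable STRIDE categories to each diagram element."""
    for el in elements:
        el.threats = sorted(APPLICABLE[el.kind])
    return elements

model = enumerate_threats([
    Element("customer portal", "process"),
    Element("account database", "data_store"),
])
for el in model:
    print(f"{el.name}: {', '.join(el.threats)}")
```

The enumerated list then becomes the starting point for assessing likelihood, impact, and mitigations for each element.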
However, with the increasing number of large-scale data breaches and the evolving complexity of cybersecurity threats, many regulatory agencies and other industry-based standards institutions have called for organizations to go one step further and use CTI to understand the tactics, techniques, and procedures (TTPs) utilized by attackers. By using CTI and other threat-based models, organizations can gain insight into potential attack vectors through red-teaming and penetration testing, simulating each phase of a hypothetical attack on the organization’s information system and determining potential countermeasures that can be employed at each step of the kill chain. For instance, Lockheed Martin’s formal kill chain model involves seven steps (reconnaissance, weaponization, delivery, exploitation, installation, command and control, and actions on objective) and proposes six potential defensive measures at each step (detect, deny, disrupt, degrade, deceive, and contain). Consequently, an organization can layer its defenses along each step in the kill chain to increase the probability of detection or prevention of the attack. The kill chain model was used as part of a U.S. Senate investigation into the data breach of a major corporation in 2013, identifying several stages along the chain where the attack could have been prevented or detected.
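The layered-defense idea described above is often operationalized as a course-of-action matrix: kill-chain phases on one axis, defensive actions on the other, with deployed controls filling the cells. A Python sketch under that framing; the phase and action names follow the Lockheed Martin model, while the control entries are invented examples:

```python
# Phases and defensive actions from the Lockheed Martin kill chain model.
KILL_CHAIN = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command and control", "actions on objective",
]
ACTIONS = ["detect", "deny", "disrupt", "degrade", "deceive", "contain"]

# controls[phase][action] -> deployed control; entries are hypothetical.
controls = {phase: {} for phase in KILL_CHAIN}
controls["delivery"]["detect"] = "mail gateway alerting"
controls["delivery"]["deny"] = "attachment filtering"
controls["exploitation"]["deny"] = "patch management"
controls["command and control"]["disrupt"] = "egress firewall rules"

def uncovered_phases(controls):
    """Phases with no defensive action in place: gaps in the layered defense."""
    return [phase for phase, actions in controls.items() if not actions]

print("Coverage gaps:", uncovered_phases(controls))
```

Filling in the matrix this way makes it obvious which phases an attacker could traverse without ever tripping a control.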
This threat identification process requires greater detail on adversarial TTPs. Fortunately, MITRE has provided for public consumption its ATT&CK (adversarial tactics, techniques, and common knowledge) platform. ATT&CK collects and streamlines adversarial TTPs in specific detail and provides information on each technique and potential mitigating procedures, including commonly used attack patterns for each. For instance, one technique identified by ATT&CK is to encrypt data being exfiltrated to avoid detection by data loss prevention (DLP) tools or other network anomaly detection tools, and ATT&CK identifies more than forty known techniques and tools that have been used to achieve encrypted transmission. ATT&CK also identifies potential detection and mitigation options, such as scanning unencrypted channels for encrypted files using DLP or intrusion detection software. Thus, instead of a generic data breach risk analysis, organizations can understand specific TTPs that may make data breach detection and analysis more difficult, and possibly take measures to prevent them.
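A common heuristic behind the detection option mentioned above (scanning channels expected to carry plaintext for encrypted files) is Shannon entropy: encrypted or compressed payloads look nearly random, approaching 8 bits of entropy per byte. A minimal sketch; the 7.5 threshold is an assumed tuning value, not a figure from ATT&CK:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; encrypted/compressed data approaches 8.0."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(payload: bytes, threshold: float = 7.5) -> bool:
    """Hypothetical DLP-style check: flag high-entropy payloads on a channel
    expected to carry plaintext. Threshold is an assumed tuning value."""
    return shannon_entropy(payload) > threshold

print(looks_encrypted(b"plain english text, highly repetitive " * 20))  # False
print(looks_encrypted(os.urandom(4096)))  # random bytes mimic ciphertext: True
```

Entropy alone produces false positives (legitimate compressed archives also score high), so in practice it is one signal among several in a DLP or intrusion detection pipeline.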
By leveraging open-source CTI from tools such as ATT&CK and other reports from third-party sources such as government and industry alerts, organizations can begin the process of designing proactive defenses against cyber threats. It is important to note, however, that ATT&CK can only inform an organization’s threat modeling, and is not a threat model itself; additionally, ATT&CK focuses on penetration and hacking TTPs and, therefore, does not examine other threats that organizations may face, including distributed denial of service (DDoS) attacks that threaten the availability of its systems. Such threats will still need to be accounted for in any financial organization’s risk assessment, particularly if such attacks prevent its clients from accessing their financial accounts and, ultimately, their money.
On October 30, the U.K. Information Commissioner’s Office (ICO) announced an agreement reached between the ICO and a social media company that resolves an investigation into the company’s alleged misuse of personal data. The company has agreed to withdraw its appeal of the £500,000 penalty issued last year under section 55A of the Data Protection Act 1998 (DPA) and settle the case without an admission of guilt. The investigation stems from a data incident affecting upwards of 87 million users worldwide that included the processing of personal data about U.K. users in the context of a U.K. establishment. According to the ICO, the company violated principles of the DPA by (i) unfairly processing personal data; and (ii) failing “to take appropriate technical and organi[z]ational measures against unauthori[z]ed or unlawful processing of personal data.” The ICO published a statement by the company’s associate general counsel in which he noted that the company has “made major changes” to its platform that significantly restricts the information accessible to app developers, and that “[p]rotecting people’s information and privacy is a top priority for [the company].”
On October 21, the National Institute of Standards and Technology (NIST) released the second revision of its Big Data Interoperability Framework (NBDIF), which aims to “develop consensus on important, fundamental concepts related to Big Data” with the understanding that Big Data systems have the potential to “overwhelm traditional technical approaches,” including traditional approaches to privacy and data security. Modest updates were made to Volume 4 of the NBDIF, which focuses on privacy and data security, including recommending a layered approach to Big Data system transparency. With respect to transparency, Volume 4 introduces three levels, starting from level 1, which involves a System Communicator that “provides online explanations to users or stakeholders” discussing how information is processed and retained in a Big Data system, as well as records of “what has been disclosed, accepted, or rejected.” At the most mature levels, transparency includes developing digital ontologies (multi-level architecture for digital data management) across domain-specific Big Data systems to enable adaptable privacy and security configurations based on user characteristics and populations. Largely intact, however, are the Big Data Safety Levels in Appendix A, which are voluntary (standalone) standards regarding best practices for privacy and data security in Big Data systems, and include application security, business continuity, and transparency aspects.
Buckley Special Alert
Last week, the California attorney general released the highly anticipated proposed regulations implementing the California Consumer Privacy Act (CCPA). The CCPA — which was enacted in June 2018 (covered by a Buckley Special Alert), amended several times, with the most recent amendments signed into law on Oct. 11, and is currently set to take effect on Jan. 1, 2020 — directed the California attorney general to issue regulations to further the law’s purpose.
On October 10, the California attorney general released the highly anticipated proposed regulations implementing the California Consumer Privacy Act (CCPA). The CCPA—which was enacted in June 2018 (covered by a Buckley Special Alert), amended in September 2018, amended again in October 2019 (pending Governor Gavin Newsom’s signature), and is currently set to take effect on January 1, 2020 (InfoBytes coverage on the amendments available here and here)—directed the California attorney general to issue regulations to further the law’s purpose. The proposed regulations address a variety of topics related to the law, including:
- The handling of consumer requests made under the CCPA, such as requests to know, requests to delete, and requests to opt-out;
- Service provider classification and obligations;
- The process for verifying consumer requests;
- Training and recordkeeping requirements; and
- Special requirements related to minors.
The California attorney general will hold four public hearings between December 2 and December 5 on the proposed regulations. Written comments are due by December 6.
Notably, the Notice of Proposed Rulemaking states that “the adoption of these regulations may have a significant, statewide adverse economic impact directly affecting business, including the ability of California businesses to compete with businesses in other states” and requests that the public consider, among other things, different compliance requirements depending on a business’s resources or potential exemptions from the regulatory requirements for businesses when submitting comments on the proposal.
Buckley will follow up with a more detailed summary of the proposed regulations soon.
On October 1, the European Court of Justice held that, under the Privacy and Electronic Communications Directive (ePrivacy Directive), a website user does not “consent” to the use of a cookie when a website provides a “pre-checked box” that needs to be deselected for a user to withdraw consent. According to the judgment, a consumer group brought an action in German court against a German lottery company, challenging the website’s use of a pre-checked box allowing the website to place a cookie—text files stored on the user’s computer allowing website providers to collect information about a user’s behavior when the user visits the website—unless the consumer deselected the box. The consumer group argued that the pre-selection of the box is not valid consent under the ePrivacy Directive. The lower court had upheld the action in part, but, following an appeal, the German Federal Court of Justice stayed the proceedings and referred the matter to the EU Court of Justice.
On September 25, Alastair Mactaggart, the Founder and Chair of the Californians for Consumer Privacy and the drafter of the initiative that ultimately resulted in the California Consumer Privacy Act (CCPA), announced a newly filed ballot measure to further expand the CCPA (currently effective on January 1, 2020), titled the “California Privacy Rights and Enforcement Act of 2020” (the Act) (an additional version of the Act is available with comments from Mactaggart’s team). The Act would result in significant amendments to the CCPA, including the following, among others:
- Sensitive personal information. The Act sets forth additional obligations in connection with a business’s collection, use, sale, or disclosure of “sensitive personal information,” which is a new term introduced by the Act. “Sensitive personal information” includes categories such as health information; financial information (stated as, “a consumer’s account log-in, financial account, debit card, or credit card number in combination with any required security or access code, password, or credentials allowing access to an account”); racial or ethnic origin; precise geolocation; or other data collected and analyzed for the purpose of identifying such information.
- Disclosure of sensitive personal information. The Act expands on the CCPA’s disclosure requirements to include, among other things, a requirement for businesses to specify the categories of sensitive personal information that will be collected, disclose the specific purposes for which the categories of sensitive personal information are collected or used, and disclose whether such information is sold. In addition, the Act prohibits a business from collecting additional categories of sensitive personal information, or from using sensitive personal information collected for purposes that are incompatible with the disclosed purpose for which the information was collected or other disclosed purposes reasonably related to the original purpose for which the information was collected, unless notice is provided to the consumer.
- Contractual requirements. The Act sets forth additional contractual requirements and obligations that apply when a business sells personal information to a third party or discloses personal information to a service provider or contractor for a business purpose. Among other things, the Act obligates the third party, service provider, or contractor to provide at least the same level of privacy protection required by the Act. The contract must also require the third party, service provider, or contractor to notify the business if it makes a determination that it can no longer meet its obligation to protect the personal information as required by the Act.
- Advertising and marketing opt-out. The Act includes a consumer’s right to opt-out, at any time, of the business’s use of their sensitive personal information for advertising and marketing or disclosure of personal information to a service provider or contractor for the same purposes. The Act requires that businesses provide notice to consumers that their sensitive personal information may be used or disclosed for advertising or marketing purposes and that the consumers have “the right to opt-out” of its use or disclosure. “Advertising and marketing” means a communication by a business or a person acting on the business’s behalf in any medium intended to induce a consumer to buy, rent, lease, join, use, subscribe to, apply for, provide, or exchange products, goods, property, information, services, or employment.
- Affirmative consent for sale of sensitive personal information. The Act expands on the CCPA’s opt-out provisions and prohibits businesses from selling a consumer’s sensitive personal information without actual affirmative authorization.
- Right to correct inaccurate information. The Act provides consumers with the right to require a business to correct inaccurate personal information.
- Definition of business. The Act revises the definition of “business” to:
- Clarify that the time period for calculating annual gross revenues is based on the prior calendar year;
- Provide that an entity meets the definition of “business” if the entity, in relevant part, alone or in combination, annually buys the personal information of 100,000 or more consumers or households;
- Include a joint venture or partnership composed of businesses in which each business has at least a 40% interest; and
- Provide a catch-all for businesses not covered by the foregoing bullets.
- The “California Privacy Protection Agency.” The Act creates the California Privacy Protection Agency, which would have the power, authority, and jurisdiction to implement and enforce the CCPA (powers that are currently vested in the attorney general). The Act states that the Agency would have five members, including a single Chair, and the members would be appointed by the governor, the attorney general, and the leaders of the senate and assembly.
If passed, the Act would become operative on January 1, 2021, and would apply to personal information collected by a business on or after January 1, 2020.
As previously covered by a Buckley Special Alert, on September 13, lawmakers in California passed numerous amendments to the CCPA, which are awaiting Governor Gavin Newsom’s signature; he has until October 13 to sign. The amendments leave the majority of consumer rights intact, but certain provisions, including the definition of “personal information,” were clarified, and certain exemptions were refined regarding the collection of data that bears on financial services companies.
On September 6, the National Institute of Standards and Technology (NIST) released a preliminary draft of the NIST Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management to help organizations assess and reduce risks. The draft framework is designed to align with NIST’s Cybersecurity Framework (previously covered by InfoBytes here), which provides guidance that critical infrastructures, including the financial services industry, should voluntarily follow to mitigate cybersecurity risk. The draft framework establishes three components to reinforce privacy risk management: (i) the “Core” describes a set of privacy activities and outcomes used to manage risks that arise from data processing or are associated with privacy breaches; (ii) “Profiles” cover an organization’s current privacy activities or desired outcomes that have been prioritized to manage privacy risk; and (iii) “Implementation Tiers” address how organizations see privacy risk, and whether they have sufficient processes and resources in place to manage that risk. According to NIST, “Finding ways to continue to derive benefits from data while simultaneously protecting individuals’ privacy is challenging, and not well-suited to one-size-fits-all solutions.” Public comments will be accepted through October 24.
- Daniel P. Stipano to discuss “Connecting the dots on your CDD program” at the ABA/ABA Financial Crimes Enforcement Conference
- Daniel P. Stipano to discuss “Beneficial Ownership: You have questions – We have quick answers” at the ABA/ABA Financial Crimes Enforcement Conference
- Moorari K. Shah to discuss "Legal & regulatory issues – Next wave of regulatory policy" at the Marketplace Lending & Alternative Financing Summit
- Daniel P. Stipano to discuss "Risk management in enforcement actions: Managing risk or micromanaging it" at an American Bar Association webinar
- Kari K. Hall and Christopher M. Walczyszyn to speak on "Understanding updates to Regulation CC to ensure effective check processing" at a National Association of Federal Credit Unions webinar
- APPROVED Webcast: Periodic reporting made easier
- Daniel P. Stipano to discuss "A 20/20 view on 2020’s legislative and regulatory outlook" at the ACAMS Anti-Financial Crime and Public Policy Conference