On July 25, the New York governor signed two bills designed to strengthen protections for consumers in the event their private information is compromised in a data breach.
A 5635B, the Stop Hacks and Improve Electronic Data Security Act (SHIELD Act), updates the state’s privacy law by expanding the definition of personal information and broadening the definition of a data breach. Notably, the SHIELD Act applies to any person or entity with access to a New York resident’s private information, regardless of whether the entity conducts business in the state. Among other provisions, the SHIELD Act:
- Requires all covered entities to adopt and implement “reasonable” administrative, technical, and physical safeguards to protect and dispose of sensitive data, including administrative measures such as employee training;
- Stipulates that a covered entity that is already regulated by, and in compliance with, certain existing applicable state or federal data security requirements (e.g., Gramm-Leach-Bliley Act, HIPAA, and 23 NYCRR Part 500—NYDFS’ Cybersecurity Regulation) is considered a “compliant regulated entity”;
- Requires entities to promptly notify impacted individuals under new, broadened data breach notification requirements, which now include (i) “access to” private information as a trigger for notification, in addition to the existing “acquired” trigger; and (ii) expanded data types, including biometric data, email addresses, and corresponding passwords or security questions and answers;
- Applies a more flexible standard for small businesses to ease regulatory burdens (qualifying small businesses must have fewer than 50 employees, under $3 million in gross annual revenue, or less than $5 million in assets) and will consider a small business compliant if its “security program contains reasonable administrative, technical and physical safeguards that are appropriate for the size and complexity of the small business” to protect the security, confidentiality, and integrity of private information; and
- Broadens the New York attorney general’s oversight regarding data breaches impacting state residents. The SHIELD Act further stipulates that actions may not be brought under the law’s provisions unless the action is commenced within three years following either the date on which the attorney general received notice of the violation, or the date the notice was sent to affected individuals, whichever occurs first. However, “[i]n no event shall an action be brought after six years from the date of discovery of the breach of private information by the company unless the company took steps to hide the breach.”
The SHIELD Act takes effect March 21, 2020.
A 2374, which was signed into law the same day, prohibits consumer credit reporting agencies from charging fees to consumers if the agency’s system was involved in a data breach involving Social Security numbers. Credit reporting agencies are required to provide “reasonable identity theft prevention services and, if applicable, identity theft mitigation services for a period not to exceed five years at no cost to such consumers.” The law applies to any breach of security of a consumer credit reporting agency that occurred within the last three years. This measure takes effect September 23.
FTC and DOJ announce $5 billion privacy settlement with social media company; SEC settles for $100 million
On July 24, the FTC and the DOJ officially announced (see here and here) that the world’s largest social media company will pay a $5 billion penalty to settle allegations that it mishandled its users’ personal information. As previously covered by InfoBytes, it was reported on July 12 that the FTC approved the penalty in a 3-2 vote. This is the largest privacy penalty ever levied by the agency, almost “20 times greater than the largest privacy or data security penalty ever imposed worldwide,” and one of the largest ever assessed by the U.S. government for any violation. According to the complaint, filed the same day as the settlement, the company allegedly used deceptive disclosures and settings to undermine users’ privacy preferences in violation of a 2012 privacy settlement with the FTC; those settings allowed the company to share users’ data with third-party apps that were downloaded by users’ “friends.” Moreover, the complaint alleges that many users were unaware the company was sharing the information and therefore did not take the steps needed to opt out of the sharing. Relatedly, the FTC also announced a separate action against a British consulting and data analytics firm for allegedly using deceptive tactics to “harvest personal information from millions of [the social media company’s] users.”
In addition to the monetary penalty, the 20-year settlement order overhauls the company’s privacy program. Specifically, the order, among other things, (i) establishes an independent privacy committee of the company’s board of directors; (ii) requires the company to designate privacy program compliance officers who can only be removed by the board’s privacy committee; (iii) requires an independent third-party assessor to perform biennial assessments of the company’s privacy program; (iv) requires the company to conduct a specific privacy review of every new or modified product, service, or practice before it is implemented; and (v) mandates that the company report any incidents in which data of 500 or more users have been compromised to the FTC.
In dissenting statements, Commissioner Chopra and Commissioner Slaughter asserted that the settlement, while historic, does not contain terms that would effectively deter the company from future violations. Commissioner Slaughter argued, among other things, that the civil penalty is insufficient and that the order should have contained “meaningful limitations on how [the company] collects, uses, and shares data.” Similarly, Commissioner Chopra argued that the order imposes no meaningful changes to the company’s structure or financial incentives, and that the immunity provided to the company’s officers and directors is unwarranted.
On the same day, the SEC announced that the company also agreed to pay $100 million to settle allegations that it misled investors about the risks it faced related to the misuse of its consumer data. The SEC’s complaint alleges that in 2015, the company was aware of the British consulting and data analytics firm’s misuse of its consumer data but did not correct its disclosures for more than two years. Additionally, the SEC alleges the company failed to have policies and procedures in place during that time to assess the results of internal investigations for the purposes of making accurate disclosures in public filings. The company neither admitted nor denied the allegations.
On July 19, the United Kingdom’s Information Commissioner’s Office (ICO) issued an £80,000 fine against a London-based real estate management company for allegedly leaving over 18,000 customers’ personal data exposed for almost two years. According to the ICO, when the company transferred personal data from its server to a partner organization, it failed to switch off an “anonymous authentication” function, which exposed all the data stored between March 2015 and February 2017, including personal data such as bank statements, salary details, copies of passports, dates of birth, and addresses. The ICO alleges that the company failed to take appropriate technical and organizational measures to protect customers’ personal data and concluded the failures were “a serious contravention of the 1998 data protection laws which have since been replaced by the [General Data Protection Regulation] GDPR and the Data Protection Act 2018.”
On July 12, it was reported that the FTC has approved a $5 billion penalty against the world’s largest social media company for allegedly mishandling its users’ personal information. The reported settlement would be the largest privacy penalty ever levied by the agency. According to reports, the settlement, which was approved in a 3-2 vote, resolves allegations that the company allowed a British consulting firm access to 87 million users’ personal data for political consulting purposes in violation of a 2012 privacy settlement with the FTC. Neither the FTC nor the social media company has commented on the reported settlement, which is still pending approval from the Department of Justice.
On July 8 and 9, the United Kingdom’s Information Commissioner’s Office (ICO) issued two notices of its intention to fine companies for infringements of the General Data Protection Regulation (GDPR). On July 8, the ICO announced it intended to fine a U.K.-based airline £183.39M for a September 2018 cyber incident in which, due to “poor security arrangements,” attackers were able to divert user traffic on the airline’s website to a fraudulent site, making consumer details accessible. The airline notified the ICO about the incident, which compromised the data of approximately 500,000 consumers, has cooperated with the ICO’s investigation, and has made improvements to its security arrangements. Additionally, on July 9, the ICO announced it intended to fine a multinational hotel chain £99,200,396 for failing to undertake sufficient due diligence when it purchased a hotel group in 2016 that had previously exposed 339 million guest records globally beginning in 2014. When the exposure was discovered in 2018, the hotel chain reported the incident to the ICO and has since cooperated with the investigation and made improvements to its security arrangements. In both announcements, the ICO notes that it will “consider carefully the representations made by the company and the other concerned data protection authorities” before issuing its final decision.
On July 8, FCC Chairman Ajit Pai proposed rules supported by a bipartisan group of more than 40 state attorneys general that would extend prohibitions against robocalls to caller ID spoofing of text messages and international calls, implementing measures passed last year in the RAY BAUM’s Act. Previously, anti-spoofing prohibitions applied only to domestic robocalls. According to Pai, “Scammers often robocall us from overseas, and when they do, they typically spoof their numbers to try and trick consumers. . . . With these new rules, we’ll close the loopholes that hamstring law enforcement when they try to pursue international scammers and scammers using text messaging.” The FCC will vote on the proposed rules at its August 1 meeting.
As previously covered by InfoBytes, the FCC authorized voice service providers last month to automatically identify and block unwanted robocalls “based on reasonable call analytics, as long as their customers are informed and have the opportunity to opt out of the blocking.”
On June 27, the FTC held its fourth annual PrivacyCon, which hosted research presentations on a wide range of consumer privacy and security issues. Following opening remarks by FTC Chairman Joseph Simons, the one-day conference featured four plenary sessions covering a number of hot topics:
- Session 1: Privacy Policies, Disclosures, and Permissions. Five presenters discussed various aspects of privacy policies and notices to consumers. The panel discussed current trends showing that privacy notices to consumers have generally become lengthier in recent years, which helps cover the information regulators require, but often results in information overload for consumers more generally. One presenter advocated the concept of a condensed “nutrition label” for privacy, but acknowledged the challenge of distilling complicated activities into short bullets.
- Session 2: Consumer Preferences, Expectations, and Behaviors. This panel addressed research concerning consumer expectations and behaviors with regard to privacy. Among other findings, the presenters noted that many consumers are aware that personal data is tracked, but consumers are generally unaware of what data collectors ultimately do with the personal data once collected. To that end, one presenter advocated prescriptive limits on data collection in general, which would take the onus off consumers to protect themselves. Separately, with regard to the Children’s Online Privacy Protection Act (COPPA), one presenter noted that the law generally aligns with parents’ privacy expectations, but the implementing regulations and guidelines are too broad and leave too much room for implementation variations.
- Session 3: Tracking and Online Advertising. In the third session, five presenters covered topics ranging from the privacy implications of free versus paid-for applications to the impact of the EU’s General Data Protection Regulation (GDPR). According to the presenters, current research suggests that the measurable privacy benefits of paying for an app are “tenuous at best,” and consumers cannot be expected to make informed decisions because the necessary privacy information is not always available at the point of purchase on a mobile device. As for GDPR, the panel agreed that there have been notable reductions in web use, with page views falling 9.7 percent in one study, although it is not clear whether that reduction is directly correlated to the May 25, 2018 effective date for enforcement of GDPR.
- Session 4: Vulnerabilities, Leaks, and Breach Notifications. In the final presentation, presenters discussed new research on how companies can mitigate data security vulnerabilities and improve remediation. One presenter discussed the need for proactive identification of vulnerabilities, noting that the goal should be to patch the real vulnerabilities and limit efforts related to vulnerabilities that are unlikely to be exploited. Another presenter analyzed data breach notifications to consumers, noting that all 50 states have data breach notification laws, but there is no consensus as to best practices related to the content or timing of notifications to consumers. The presenter concluded with recommendations for future notification regulations: (i) incorporate readability testing based on standardized methods; (ii) provide concrete guidelines of when customers need to be notified, what content needs to be included, and how the information should be presented; (iii) include visuals to highlight key information; and (iv) leverage the influence of templates, such as the model privacy form for the Gramm-Leach-Bliley Act.
On May 22, NYDFS announced its newly created Cybersecurity Division, led by Justin Herring as Executive Deputy Superintendent, which is, according to NYDFS, “the first of its kind to be established at a banking or insurance regulator.” The new division will focus on enforcing and issuing guidance on NYDFS’ cybersecurity regulation, 23 NYCRR Part 500; advising on cybersecurity examinations; conducting cyber-related investigations; and disseminating information on cyber-attack trends and threats. NYDFS highlighted Herring’s experience supervising cybercrime and digital currency cases as Chief of the Cyber Crimes Unit at the U.S. Attorney’s Office for the District of New Jersey and as a member of its Economic Crimes Unit, including investigating money laundering involving digital currency and prosecuting unlicensed digital currency exchanges.
On March 15, the FTC released its annual report highlighting the agency’s privacy and data security work in 2018. Among other items, the report highlights consumer-related enforcement activities in 2018, including:
- an expanded settlement with a global ride-sharing company over allegations that the company violated the FTC Act by deceiving consumers regarding the company’s privacy and data practices (covered by InfoBytes here).
- a settlement with a global online payments system company to resolve allegations that its payment and social networking service failed to adequately disclose to consumers that transfers to external bank accounts were subject to review and that funds could be frozen or removed based on a review of the underlying transaction (covered by InfoBytes here).
- a settlement with a Texas-based company over allegations that it violated the FCRA by failing to take reasonable steps to ensure the accuracy of tenant-screening information furnished to landlords and property managers (covered by InfoBytes here).
The report also highlighted the FTC’s hearings on big data, privacy, and competition conducted through its Hearings on Competition and Consumer Protection in the 21st Century initiative. (Covered by InfoBytes here and here.)
On March 5, Attorneys General from all 50 states, as well as from the District of Columbia, Guam, Puerto Rico, and the U.S. Virgin Islands, sent a letter to the Senate Committee on Commerce, Science, and Transportation supporting a recently introduced bipartisan bill to combat illegal robocalls. Among other things, S. 151, the Telephone Robocall Abuse Criminal Enforcement and Deterrence Act (TRACED Act), would: (i) grant the FCC three years to take action against robocall violations, instead of the current one-year window; (ii) authorize the agency to issue penalties of up to $10,000 per robocall; and (iii) require service providers to implement the FCC’s new call authentication framework. The AGs state that they “are encouraged that the TRACED Act prioritizes timely, industrywide implementation of call authentication protocols,” and note their support for an interagency working group that the bill would establish consisting of members from the DOJ, FCC, FTC, CFPB, other relevant federal agencies, state AGs, and non-federal stakeholders.