FTC says COPPA does not preempt state privacy claims
The FTC recently filed an amicus brief in a case on appeal before the U.S. Court of Appeals for the Ninth Circuit, arguing that the Children’s Online Privacy Protection Act (COPPA) does not preempt state laws that are consistent with the federal statute’s treatment of regulated activities. The full 9th Circuit is currently reviewing a case brought against a multinational technology company accused of using persistent identifiers to collect children’s data and track their online behavior surreptitiously and without their consent in violation of COPPA and various state laws.
As previously covered by InfoBytes, last December the 9th Circuit reversed and remanded a district court’s decision to dismiss the suit after reviewing whether COPPA preempts state law claims based on underlying conduct that also violates COPPA’s regulation. At the time, the 9th Circuit examined the language of COPPA’s preemption clause, which states that state and local governments cannot impose liability for interstate commercial activities that is “inconsistent with the treatment of those activities or actions” under COPPA. The opinion noted that the 9th Circuit has long held “that a state law damages remedy for conduct already proscribed by federal regulations is not preempted,” and that the statutory term “inconsistent” in the preemption context refers to contradictory state law requirements, or to requirements that stand as obstacles to federal objectives. The opinion further stated that because “the bar on ‘inconsistent’ state laws implicitly preserves ‘consistent’ state substantive laws, it would be nonsensical to assume Congress intended to simultaneously preclude all state remedies for violations of those laws.” As such, the appellate court held that “COPPA’s preemption clause does not bar state-law causes of action that are parallel to, or proscribe the same conduct forbidden by, COPPA. Express preemption therefore does not apply to the children’s claims.” The defendant asked the full 9th Circuit to review the ruling. The appellate court in turn asked the FTC for its views on the COPPA preemption issue, specifically with respect to “whether the [COPPA] preemption clause preempts fully stand-alone state-law causes of action by private citizens that concern data-collection activities that also violate COPPA but are not predicated on a claim under COPPA.”
In agreeing with the 9th Circuit that plaintiffs’ claims are not preempted in this case, the FTC argued that nothing in COPPA’s text, purpose, or legislative history supports the sweeping preemption that the defendant claimed. According to the defendant, plaintiffs’ state law claims are inconsistent with COPPA and are therefore preempted “because the claims were brought by plaintiffs who were not authorized to directly enforce COPPA, and would result in monetary remedies under state law that COPPA did not make available through direct enforcement.” The defendant further maintained that all state law claims relating to children’s online privacy are inconsistent with COPPA’s framework, including those brought by state enforcers. The FTC disagreed, writing that the 9th Circuit properly rejected the defendant’s interpretation, which would preempt a wide swath of traditional state laws. COPPA’s preemption clause applies only to state laws that are “inconsistent” with COPPA and does not create “field preemption,” the FTC said, adding that plaintiffs’ claims in this case are consistent with the statute.
FTC, DOJ sue maker of health app over data sharing
On May 17, the DOJ filed a complaint on behalf of the FTC against a health app for violating the Health Breach Notification Rule (HBNR) by allegedly sharing users’ sensitive personal information with third parties, disclosing sensitive health data, and failing to notify users of these unauthorized disclosures. According to the complaint, the defendant’s privacy policies repeatedly and falsely promised users that their health information would not be shared with third parties without the user’s knowledge or consent, and that any collected data was non-identifiable and used only for the defendant’s own analytics or advertising. The FTC charged the defendant with failing to implement reasonable measures to address the privacy and data security risks created by its use of third-party automated tracking tools and with sharing health information for advertising purposes without obtaining users’ affirmative express consent. Under the HBNR, companies with access to personal health records are required in certain situations to notify users, the FTC, and media outlets if there has been an unauthorized acquisition of unsecured personal health information. The defendant also allegedly failed to impose limits on how third parties could use the data and failed to adequately encrypt data shared with third parties, thus subjecting the data to potential interception and/or seizure by bad actors.
The proposed court order would require the defendant to pay a $100,000 civil penalty, and would permanently prohibit the company from sharing personal health data with third parties for advertising and from making future misrepresentations about its privacy practices. The defendant would also be required to (i) obtain user consent before sharing personal health data; (ii) limit data retention; (iii) request deletion of data shared with third parties; (iv) provide notices to users explaining the FTC’s allegations and the proposed settlement; and (v) implement comprehensive security and privacy programs to protect consumer data. The defendant has also agreed to pay a total of $100,000 to Connecticut, the District of Columbia, and Oregon (who collaborated with the FTC on the action) for violating state privacy laws with respect to its data sharing and privacy practices.
FTC proposes changes to Health Breach Notification Rule
On May 18, the FTC issued a notice of proposed rulemaking (NPRM) and request for public comment on changes to its Health Breach Notification Rule (Rule), following a notice issued last September (covered by InfoBytes here) warning health apps and connected devices collecting or using consumers’ health information that they must comply with the Rule and notify consumers and others if a consumer’s health data is breached. The Rule also ensures that entities not covered by HIPAA are held accountable in the event of a security breach. The NPRM proposed several changes to the Rule, including modifying the definition of “[personal health records (PHR)] identifiable health information,” clarifying that a “breach of security” would include the unauthorized acquisition of identifiable health information, and specifying that “only entities that access or send unsecured PHR identifiable health information to a personal health record—rather than entities that access or send any information to a personal health record—qualify as PHR related entities.” The modifications would also authorize the expanded use of email and other electronic methods for providing notice of a breach to consumers and would expand the required content for notices “to include information about the potential harm stemming from the breach and the names of any third parties who might have acquired any unsecured personally identifiable health information.” Comments on the NPRM are due 60 days after publication in the Federal Register.
The same day, the FTC also issued a policy statement warning businesses against making misleading claims about the accuracy or efficacy of biometric technologies like facial recognition. The FTC emphasized that the increased use of consumers’ biometric information and biometric information technologies (including those powered by machine learning) raises significant consumer privacy and data security concerns and increases the potential for bias and discrimination. The FTC stressed that it intends to combat unfair or deceptive acts and practices related to these issues and outlined several factors used to determine potential violations of the FTC Act.
FTC obtains TROs to halt student loan debt relief schemes
On May 8, the FTC announced that the U.S. District Court for the Central District of California recently issued temporary restraining orders (TROs) against two student loan debt relief companies that allegedly tricked consumers into paying for nonexistent repayment and loan forgiveness programs. According to the complaints (see here and here), the defendants allegedly made deceptive claims in order to lure low-income consumers into paying hundreds to thousands of dollars in illegal upfront fees as part of a purported plan to pay down their student loans. The defendants allegedly made consumers believe that they were enrolled in a legitimate loan repayment program, that their loans would be forgiven in whole or in part, and that most or all of their payments would be applied to their loan balances. The FTC alleges that, in reality, the defendants pocketed the borrowers’ payments. The FTC also charged the defendants with falsely claiming to be, or to be affiliated with, the Department of Education and stating that they were purchasing borrowers’ debt from federal student loan servicers in order to secure debt relief on their behalf. When consumers realized the debt relief program did not exist, the defendants allegedly often refused to provide refunds.
According to the FTC, these deceptive misrepresentations violated Section 5 of the FTC Act and the Telemarketing Sales Rule (TSR). The FTC also alleges that the companies violated the Gramm-Leach-Bliley Act (GLBA) by using deceptive tactics to obtain consumers’ financial information, and the TSR by calling numbers listed on the National Do Not Call Registry and by failing to pay required Do Not Call Registry fees for access. In issuing the TROs (see here and here), which temporarily halt the two schemes and freeze the defendants’ assets, the court noted that, upon “[w]eighing the equities and considering the FTC’s likelihood of ultimate success on the merits,” there is good cause to believe that immediate and irreparable harm will occur as a result of the defendants’ ongoing violations of the FTC Act, the TSR, and the GLBA, unless the defendants are restrained and enjoined.
District Court dismisses FTC’s privacy claims in geolocation action
On May 4, the U.S. District Court for the District of Idaho issued two separate rulings in a pair of related disputes between the FTC and a data broker. The disputes center around accusations made by the FTC last August that the data broker violated Section 5 of the FTC Act by unfairly selling precise geolocation data from hundreds of millions of mobile devices which can be used to trace individuals’ movements to and from sensitive locations (covered by InfoBytes here). The FTC sought a permanent injunction to stop the data broker’s practices, as well as additional relief. The data broker, upon learning that the FTC planned to file a lawsuit against it, filed a preemptive lawsuit challenging the agency’s authority.
The court first dismissed the data broker’s preemptive bid to block the FTC’s enforcement action, ruling that the data broker has not identified any “viable cause of action” to support its request for injunctive relief. The court explained that injunctive relief is a “drastic remedy” that is only available if no other legal remedy is available. However, the data broker possesses an “adequate remedy at law,” the court said, “because it can seek dismissal of, and otherwise directly defend against, the FTC’s enforcement action.”
With respect to the FTC’s action, the court granted the data broker’s motion to dismiss the FTC’s complaint, but gave the agency leave to amend. The court agreed with the data broker that the FTC’s complaint lacks sufficient allegations to support its unfairness claim under Section 5 of the FTC Act. While the court disagreed with the data broker’s assertion that it did not have “fair notice that its sale of geolocation data without restrictions near sensitive locations could violate Section 5(a) of the FTC Act” or that the FTC had to allege a predicate violation of law or policy to state a claim, the court determined that the FTC failed to adequately allege that the data broker’s practices created “a ‘significant risk’ of concrete harm.” Moreover, the court found that “the purported privacy intrusion is not severe enough to constitute ‘substantial injury’ under Section 5(n).” The court noted, however, that some of the deficiencies may be cured through additional factual allegations in an amended complaint.
House committee continues federal privacy legislation discussions
On April 27, the House Subcommittee on Innovation, Data, and Commerce, a subcommittee of the House Energy and Commerce Committee, held a hearing entitled “Addressing America’s Data Privacy Shortfalls: How a National Standard Fills Gaps to Protect Americans’ Personal Information” to continue discussions on the need for comprehensive federal privacy legislation. Subcommittee Chair Gus Bilirakis (R-FL) delivered opening remarks, commenting that the Committee has examined in depth how a federal privacy law is needed to protect Americans and balance the needs of business, government, and civil society; what happens when malicious actors exploit access to data; where the FTC’s jurisdictional lines and authority lie and how they interplay with a comprehensive federal privacy law; and the role of data brokers and the lack of protections available to consumers to manage their data.
During the hearing, subcommittee members commented that one of the major debates over the American Data Privacy and Protection Act (ADPPA) as it came out of committee last year was the degree to which it should preempt state laws. The bill drew pushback from former Speaker Nancy Pelosi, who opposed the proposed preemption measures, as well as from the California attorney general and the California Privacy Protection Agency, which expressed similar concerns and asked Congress to “allow states to provide additional protections in response to changing technology and data privacy protection practices.” The ADPPA was advanced through the committee last July by a vote of 53-2 (covered by InfoBytes here) and was sent to the House floor during the last Congressional session but never came up for a full chamber vote. The bill has not yet been reintroduced.
Subcommittee members said that while drafting a comprehensive national data privacy law is a priority, there are a lot of concerns over preemption of state laws. Certain Republican members also commented that it is very important for Congress to create a single national standard before the FTC proposes data privacy rules from its commercial surveillance rulemaking efforts. As previously covered by InfoBytes, FTC Chair Lina M. Khan and Commissioners Rebecca Slaughter and Alvaro Bedoya testified before the same committee in April, during which time they said they are currently reviewing comments on the proposed rulemaking but support federal privacy legislation.
While the ADPPA has not yet been reintroduced, House Financial Services Committee Chairman Patrick McHenry (R-NC) introduced the Data Privacy Act of 2023 (see H.R. 1165) earlier this year, which would, among other things, modernize the Gramm-Leach-Bliley Act to better align the statute with the evolving technological landscape, ensure consumers understand how their data is being collected and used, and grant consumers the power to opt out of data collection and to request deletion of their data at any time.
FTC obtains permanent ban against debt relief operators
On May 1, three individuals accused of participating in a credit card debt relief scheme agreed to court orders permanently banning them from telemarketing and selling debt relief products and services. As previously covered by InfoBytes, last November the FTC filed a lawsuit claiming the defendants and their affiliated companies violated the FTC Act and the Telemarketing Sales Rule by using telemarketers to pitch their deceptive scheme, in which they falsely claimed to be affiliated with a particular credit card association, bank, or credit reporting agency, and promised they could improve consumers’ credit scores after 12 to 18 months. The defendants also allegedly misrepresented that the upfront fee, which in some cases was as high as $18,000, was charged to consumers’ credit cards as part of the overall debt that would be eliminated, and therefore would not actually have to be paid. Without admitting or denying the allegations, the defendants agreed to the court orders (available here, here, and here) imposing numerous conditions, including (i) a permanent ban on advertising, selling, or assisting in any debt relief product or service or participating in telemarketing; (ii) a broad prohibition forbidding defendants from deceiving consumers about any other products or services they sell or market; and (iii) the surrender of certain property interests and assets that will be used to provide restitution to affected consumers. The orders impose a total monetary judgment of approximately $17.5 million, for which each defendant is jointly and severally liable, to be satisfied by defendants’ surrender of certain assets and subject to a partial suspension of the remainder of the judgment pursuant to defendants’ truthfulness regarding their financial status and ability to pay.
Federal agencies reaffirm commitment to confront AI-based discrimination
On April 25, the CFPB, DOJ, FTC, and Equal Employment Opportunity Commission released a joint statement reaffirming their commitment to protect the public from bias in automated systems and artificial intelligence (AI). “America’s commitment to the core principles of fairness, equality, and justice are deeply embedded in the federal laws that our agencies enforce to protect civil rights, fair competition, consumer protection, and equal opportunity,” the agencies said, emphasizing that existing authorities apply equally to the use of new technologies and responsible innovation as they do to any other conduct. The agencies have previously expressed concerns about potentially harmful AI applications, including black box algorithms, algorithmic marketing and advertising, abusive AI technology usage, digital redlining, and repeat offenders’ use of AI, which may contribute to unlawful discrimination and bias and violate consumers’ rights.
“We already see how AI tools can turbocharge fraud and automate discrimination, and we won’t hesitate to use the full scope of our legal authorities to protect Americans from these threats,” FTC Chair Lina M. Khan said. “Technological advances can deliver critical innovation—but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition,” Khan added.
CFPB Director Rohit Chopra echoed Khan’s sentiments and said the Bureau, along with other agencies, is taking measures to address unchecked AI. “While machines crunching numbers might seem capable of taking human bias out of the equation, that’s not what is happening,” Chopra said. “When consumers and regulators do not know how decisions are made by artificial intelligence, consumers are unable to participate in a fair and competitive market free from bias,” Chopra added. The Director’s statements concluded by noting that the Bureau will continue to collaborate with other agencies to enforce federal consumer financial protection laws, regardless of whether the violations occur through traditional means or advanced technologies.
Additionally, Assistant Attorney General Kristen Clarke of the DOJ’s Civil Rights Division noted that “[a]s social media platforms, banks, landlords, employers and other businesses choose to rely on artificial intelligence, algorithms and other data tools to automate decision-making and to conduct business, we stand ready to hold accountable those entities that fail to address the discriminatory outcomes that too often result.”
FTC testifies on privacy efforts
On April 18, FTC Chair Lina M. Khan and Commissioners Rebecca Slaughter and Alvaro Bedoya testified before the House Energy and Commerce Subcommittee on Innovation, Data, and Commerce on the agency’s efforts to protect consumers from unfair or deceptive practices and unfair methods of competition. The hearing addressed the agency’s 2024 budget request, as well as topics focused on rulemaking authority, junk fees, robocalls, fraud, and privacy initiatives, among others. House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-WA) delivered opening remarks, during which she cited the resignation of both Republican commissioners and criticized the agency’s “abuses of power.”
In a prepared statement, the commissioners provided an overview of the agency’s consumer protection work, including its initiatives to safeguard consumers’ privacy that take a multi-pronged approach focusing on health data, children and teens, and data security. The commissioners broadly discussed recent enforcement actions taken to protect sensitive health data and commented on FTC efforts to use the agency’s rulemaking authority to protect children in the marketplace (the FTC is currently reviewing the Children’s Online Privacy Protection Act Rule to determine any necessary changes and is exploring how commercial surveillance may be fueling manipulative advertising practices targeted towards children and teens). They also flagged a recent data security action as an example of how the agency “is pivoting toward requiring restrictions on what data firms can collect and retain.” According to the testimony, the FTC engaged in 35 investigations, cases, and enforcement projects with foreign consumer, privacy, and criminal enforcement agencies during the last fiscal year. The commissioners also said the agency is currently reviewing comments received on a 2022 advance notice of proposed rulemaking (covered by InfoBytes here), which sought feedback on the widespread collection of consumers’ personal information as well as concerns relating to consumer data security and commercial surveillance. While the commissioners reiterated the agency’s strong support for federal privacy legislation, Chair Rodgers said the FTC voted on partisan lines “to act unilaterally” on its own set of rules.
FTC, DOJ sue payment processor for tech support scams
On April 17, the DOJ filed a complaint on behalf of the FTC against several corporate and individual defendants for violating the FTC Act and the Telemarketing Sales Rule (TSR) by allegedly engaging in credit card laundering for tech support scams. (See also FTC press release here.) According to the complaint, since at least 2016, the defendants—a payment processing company and several of its subsidiaries, along with the company’s CEO and chief strategy officer—worked with telemarketers who made misrepresentations to consumers about the performance and security of their computers through the use of deceptive pop ups in order to sell technical support scams. Defendants’ involvement included assisting and facilitating the illegal sales and laundering the credit card charges through their own merchant accounts (thus giving the scammers access to the U.S. credit card network) where defendants received a commission for each charge. The complaint maintained that the defendants “engaged in this activity even though it and its officers knew or consciously avoided knowing that its tech support clients were engaged in deceptive telemarketing practices.”
The proposed court orders (see here, here, and here) each impose monetary judgments of $16.5 million and (i) prohibit the defendants from engaging in credit card laundering through merchant accounts; (ii) require the defendants to screen and monitor any high-risk clients and take action if clients should charge consumers without authorization or violate the TSR; and (iii) prohibit the defendants from engaging in payment processing or assisting tech support companies that engage in false or unsubstantiated telemarketing or advertising. According to the DOJ’s announcement, the defendants will be required to pay a combined total of $650,000 in consumer redress. This payment will result in the suspension of the total monetary judgment of $49.5 million due to the defendants’ inability to pay.