District Court denies certification and defendants’ motion for summary judgment in FDCPA class action
On January 26, the U.S. District Court for the Western District of Washington denied a plaintiff’s motion for class certification and denied motions for summary judgment from defendants in an FDCPA case stemming from a consent order between one of the defendants and the CFPB. As previously covered by InfoBytes, in September 2017, the CFPB announced it had filed a complaint in the U.S. District Court for the District of Delaware against a collection of 15 Delaware statutory trusts and their debt collector for, among other things, allegedly filing lawsuits against consumers for private student loan debt that they could not prove was owed or that was outside the applicable statute of limitations. According to the consent judgment, the trusts were required to pay at least $3.5 million in restitution to more than 2,000 consumers who made payments resulting from the improper collection suits, to pay $7.8 million in disgorgement to the Treasury Department, and to pay an additional $7.8 million civil money penalty to the CFPB. In addition, the trusts were required to: (i) hire an independent auditor, subject to the Bureau’s approval, to audit all 800,000 student loans in the portfolio to determine if collection efforts must be stopped on additional accounts; (ii) cease collection attempts on loans that lack proper documentation or that are time-barred; and (iii) ensure false or misleading documents are not filed and that documents requiring notarization are handled properly. A separate consent order issued against the debt collector orders the company to pay a $2.5 million civil money penalty to the CFPB.
According to the district court’s order, the plaintiffs, who were sued by the defendants for failing to pay their student loans, alleged that the defendants filed fraudulent, deceptive, and misleading affidavits in order to obtain default judgments. The plaintiffs sought to certify a class of Washington residents from whom the defendants sought to collect a debt allegedly owned by one of the trusts. The district court, however, was “unconvinced” that any of the questions would generate common answers on a class-wide basis. For example, the question of whether the defendants’ employees filed false or misleading affidavits “cannot be resolved in one stroke,” the district court said, because the plaintiffs “cannot show by a preponderance of the evidence that the documents Defendants used in every debt collection action suffered from the same alleged deficiencies.” With respect to the defendants’ summary judgment motion, the district court determined there were genuine issues of material fact regarding the alleged violations of the FDCPA and state law in Washington. The district court denied the defendants’ motion for summary judgment, noting that “[a]ttempts to collect debts with false affidavits and without the necessary documentation to prove the claims is unfair or unconscionable and involves false, deceptive, and/or misleading representations in violation of the FDCPA.”
On January 27, the FTC finalized an order with an education technology (ed tech) provider over claims that the provider’s lax data security practices led to the exposure of millions of users’ and employees’ sensitive information, including Social Security numbers, email addresses, and passwords. As previously covered by InfoBytes, due to the company’s alleged failure to adequately protect the personal information collected from its users and employees, the company experienced four data breaches beginning in September 2017, when a phishing attack granted a hacker access to employees’ direct deposit information. Claiming violations of Section 5(a) of the FTC Act, the FTC alleged the company failed to implement basic security measures, stored personal data insecurely, and failed to implement a written security policy until January 2021, despite experiencing three phishing attacks.
Under the terms of the final decision and order, the company (which neither admitted nor denied any of the allegations) is required to take several measures to address the alleged conduct, including: (i) implementing a data retention and deletion process, which will allow users to request access to and deletion of their data; (ii) providing multi-factor authentication methods for users to secure their accounts; (iii) providing notice to affected individuals; (iv) implementing a comprehensive information security program; and (v) obtaining initial and biennial third-party information security assessments. The company must also submit covered incident reports to the FTC and is prohibited from making any misrepresentations relating to how it collects, maintains, uses, deletes, permits, or denies access to individuals’ covered information.
On January 27, the NCUA board unanimously voted to maintain the current temporary 18 percent interest rate ceiling for loans made by federal credit unions (FCUs) for another 18 months. The extension starts after the current period ends March 10. According to the announcement, the National Association of Federally-Insured Credit Unions (NAFCU) urged the NCUA to immediately raise the interest rate ceiling to 21 percent in order to help mitigate interest rate-related risks facing FCUs. Recognizing that the NAFCU “has consistently advocated for a floating permissible interest rate ceiling to address constraints of the 15 percent ceiling set by the FCU Act,” NCUA Chairman Todd Harper said the agency is conducting an analysis of a floating interest rate ceiling that should be completed by the April board meeting.
On January 27, the FDIC released a list of administrative enforcement actions taken against banks and individuals in December. The FDIC made public nine orders, including “one order to pay civil money penalty, two consent orders, one combined personal consent order and order to pay, two Section 19 orders, four prohibition orders, and seven orders of termination of insurance.”
The actions included a civil money penalty order against a Georgia-based bank related to violations of the Flood Disaster Protection Act. The FDIC determined that the bank had engaged in a pattern or practice of violations because it “made, increased, extended, or renewed loans secured by a building or mobile home located in a special flood hazard area or to be located in a special flood hazard area without providing timely notice to the borrower and/or the servicer as to whether flood insurance was available for the collateral.”
Additionally, the FDIC issued a consent order against a Texas-based bank alleging the bank engaged in “unsafe or unsound banking practices or violations of law or regulation relating to, among other things, weaknesses in board and management oversight of the information technology function.” The bank neither admitted nor denied the allegations but agreed, among other things, that it would develop a staffing analysis plan “to ensure sufficient resources are available with the knowledge [and] prerequisite skills commensurate with the risk profile and complexity of the Bank’s information technology function.”
On January 27, the Federal Reserve Board issued a policy statement providing guidelines on how the agency evaluates requests from supervised uninsured and insured banks seeking to engage in novel activities, such as those involving crypto assets. Recognizing that in recent years the Fed has received numerous inquiries, notifications, and proposals from banks seeking to engage in new or unprecedented activities, the Fed clarified that when evaluating such inquiries, uninsured and insured banks supervised by the Fed would be subject to the same limitations currently imposed on OCC-supervised national banks, including with respect to crypto-asset-related activities. According to a board memo published the same day, the Fed said it “will presumptively exercise its authority to limit state member banks to engaging as principal in only those activities that are permissible for national banks—in each case, subject to the terms, conditions, and limitations placed on national banks with respect to the activity—unless those activities are permissible for state banks by federal law.” This “equal treatment” is intended to “promote a level playing field and limit regulatory arbitrage,” the Fed said.
The Fed reiterated that banks must be able to ensure that any activities they plan to engage in are permitted by law and conducted in a safe and sound manner. A bank should implement risk management processes, internal controls, and information systems that are “appropriate and adequate for the nature, scope, and risks of its activities,” the Fed noted. The Fed, however, explained that the policy statement does “not prohibit a state member bank, or prospective applicant, from providing safekeeping services, in a custodial capacity, for crypto-assets if conducted in a safe and sound manner and in compliance with consumer, anti-money laundering, and anti-terrorist financing laws.”
The policy statement was issued the same day the Fed denied a request from a Wyoming-based digital asset firm to become a member of the Federal Reserve System. The Fed explained that the firm—a special purpose depository institution chartered by the state of Wyoming that “proposed to engage in novel and untested crypto activities that include issuing a crypto asset on open, public and/or decentralized networks”—presented significant safety and soundness risks. Additionally, the Fed determined that the digital asset firm’s risk management framework failed to sufficiently address heightened risk concerns, including its ability to mitigate money laundering and terrorism financing risks.
On January 26, the California Department of Financial Protection and Innovation (DFPI) announced that it entered into a $22.5 million settlement agreement with a Cayman Islands digital asset firm to resolve a securities enforcement action regarding its interest-bearing virtual currency account. As previously covered by InfoBytes, in September 2022, the New York attorney general sued the firm for allegedly offering unregistered securities and defrauding investors. A North American Securities Administrators Association working group—composed of the DFPI and state regulators from Washington, Kentucky, New York, Oklahoma, Indiana, Maryland, South Carolina, Vermont, and Wisconsin—collaborated in the investigation into the firm. The states alleged that the platform failed to register as a securities and commodities broker but told investors that it was fully in compliance. According to the New York AG’s complaint, the platform promoted and sold securities through an interest-bearing virtual currency account that promised high returns for participating investors. The New York AG said that a cease-and-desist letter was sent to the platform in October 2021, and that while the platform stated it was “working diligently to terminate all services” in the state, it continued to handle more than 5,000 accounts as of July. The complaint charges the platform with violating New York’s Martin Act and New York Executive Law § 63(12), and seeks restitution, disgorgement of profits, and a permanent injunction. The announcement also noted the SEC entered into a separate settlement with the firm for the same penalty amount, alleging that it failed to register the offer and sale of its retail crypto-asset lending product (covered by InfoBytes here).
On January 26, the National Institute of Standards and Technology (NIST) released voluntary guidance to help organizations that design, deploy, or use artificial intelligence (AI) systems mitigate risk. The Artificial Intelligence Risk Management Framework (developed in close collaboration with the private and public sectors pursuant to a Congressional directive under the National Defense Authorization Act for Fiscal Year 2021) “provides a flexible, structured and measurable process that will enable organizations to address AI risks,” NIST explained. The framework breaks down the process into four high-level functions: govern, map, measure, and manage. These categories, among other things, (i) provide guidance on how to evaluate AI for legal and regulatory compliance and ensure policies, processes, procedures and practices are transparent, robust, and effective; (ii) outline processes for addressing AI risks and benefits arising from third-party software and data; (iii) describe the mapping process for collecting information to establish the context to frame AI-related risks; (iv) provide guidance for employing and measuring “quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts”; and (v) set forth a proposed process for managing and allocating risk management resources. Examples are also provided within the framework to help organizations implement the guidance.
“This voluntary framework will help develop and deploy AI technologies in ways that enable the United States, other nations and organizations to enhance AI trustworthiness while managing risks based on our democratic values,” Deputy Commerce Secretary Don Graves said in the announcement. “It should accelerate AI innovation and growth while advancing—rather than restricting or damaging—civil rights, civil liberties and equity for all.”
On January 27, the California attorney general announced an investigation into mobile applications’ compliance with the California Consumer Privacy Act (CCPA). The AG sent letters to businesses in the retail, travel, and food service industries that maintain popular mobile apps that allegedly fail to comply with consumer opt-out requests or do not offer mechanisms for consumers to delete personal information or stop the sale of their data. The investigation also focuses on businesses that fail to process consumer opt-out and data-deletion requests submitted through an authorized agent, as required under the CCPA. “On this Data Privacy Day and every day, businesses must honor Californians’ right to opt out and delete personal information, including when those requests are made through an authorized agent,” the AG said, adding that authorized agent requests include “those sent by Permission Slip, a mobile application developed by Consumer Reports that allows consumers to send requests to opt out and delete their personal information.” The AG encouraged the tech industry to develop and adopt user-enabled global privacy controls for mobile operating systems to enable consumers to stop apps from selling their data.
As previously covered by InfoBytes, the CCPA was enacted in 2018 and took effect January 1, 2020. The California Privacy Protection Agency is currently working on draft regulations to implement the California Privacy Rights Act, which largely became effective January 1, to amend and build upon the CCPA. (Covered by InfoBytes here.)
On January 25, FinCEN’s acting Deputy Director, Jimmy Kirby, spoke before the Identity Policy Forum regarding digital identity threats, stating that FinCEN is “pragmatically focused” on protecting the U.S. financial system from illicit finance threats. According to Kirby, financial institutions must establish with confidence who their customers are on the front end and throughout the customer relationship. He noted that a failure or security compromise in any step of that process compromises the integrity of customer identity. Kirby also pointed out that security breaches have led to data hacks of centralized repositories of identity-related information, exposing personally identifiable information and making those data sources less reliable, and that identity-related suspicious activity reports are increasing. In light of these threats, Kirby said that FinCEN designed the Identity Project to achieve three goals: (i) learn about financial institutions’ customer identification processes; (ii) quantify process breakdowns, vulnerabilities, and threats; and (iii) identify solutions, including digital identity. Kirby also discussed responsible innovation and emphasized the need to “foster development of infrastructure, information sharing, and standards that will safeguard the future of identity and the financial system.” Regarding expanding partnerships and feedback loops, Kirby said that the public sector must learn from each other, and that FinCEN is “also engaging with other domestic Federal agencies and regulators on digital identity, at FedID and throughout the year.”
OFAC announces human rights abuse sanctions against Paraguay’s former president and current vice president
On January 26, the U.S. Treasury Department’s Office of Foreign Assets Control (OFAC) announced sanctions against the former president of Paraguay and the current vice president for their involvement in corrupt actions that undermine democratic institutions in the country. OFAC also designated several companies that are owned or controlled by the former president. According to OFAC, the sanctioned persons are being designated as perpetrators of serious human rights abuse and corruption pursuant to Executive Order 13818, which builds upon and implements the Global Magnitsky Human Rights Accountability Act. As a result of the sanctions, all property and interests in property belonging to the sanctioned persons subject to U.S. jurisdiction are blocked and must be reported to OFAC. Additionally, “any entities that are owned, directly or indirectly, individually or in the aggregate, 50 percent or more by one or more blocked persons are also blocked.” OFAC noted that its regulations generally prohibit U.S. persons from participating in transactions with these persons, which “include the making of any contribution or provision of funds, goods, or services by, to, or for the benefit of any designated person, or the receipt of any contribution or provision of funds, goods, or services from any such person.”