
InfoBytes Blog

Financial Services Law Insights and Observations


  • CPPA releases latest draft of automated decision-making technology regulation

    State Issues

    The California Privacy Protection Agency (CPPA) released an updated draft of its proposed enforcement regulations for automated decisionmaking technology in connection with its March 8 board meeting. The draft regulations included new definitions, including “automated decisionmaking technology,” which means “any technology that processes personal information and uses computation to execute a decision, replace human decisionmaking, or substantially facilitate human decisionmaking.” This definition expands the regulations’ scope beyond the previous September draft (covered by InfoBytes here).

    Among other things, the draft regulations would require businesses that use automated decisionmaking technology to provide consumers with a “Pre-use Notice” informing them of (i) the business’s use of the technology; (ii) their right to opt out of the business’s use of the automated decisionmaking technology and how they can submit such a request (unless exempt); (iii) a description of their right to access information; and (iv) a description of how the automated decisionmaking technology works, including its intended content and recommendations and how the business plans to use the output. The draft regulations detailed further requirements for the opt-out process.

    The draft regulations also included a new article, entitled “risk assessments,” setting out when a business must conduct an assessment, including requirements for businesses that process personal information to train automated decisionmaking technology or artificial intelligence. Under the proposed regulations, any business whose processing of consumers’ personal information may present significant risk to consumers’ privacy must conduct a risk assessment before initiating that processing. If a business previously conducted a risk assessment for a processing activity in compliance with the article, submitted an abridged risk assessment to the CPPA, and made no changes to the processing activity, the business is not required to submit an updated risk assessment; it must, however, submit a certification of compliance to the CPPA.

    The CPPA has not yet started the formal rulemaking process for these regulations; the drafts are intended to facilitate board discussion and public participation and remain subject to change.

    State Issues Privacy Agency Rule-Making & Guidance California CPPA Artificial Intelligence

  • FTC updates the Telemarketing Sales Rule, proposes tech support rule

    Agency Rule-Making & Guidance

    On March 7, the FTC announced updates to the Telemarketing Sales Rule (TSR) to extend fraud protections to businesses and modernize recordkeeping requirements in response to technological advancements. These updates were part of an ongoing review of the TSR, which governs telemarketing practices, including the Do Not Call (DNC) Registry and rules against telemarketing robocalls.

    The newly finalized rule broadened the scope of prohibited deceptive and abusive telemarketing practices to include business-to-business calls, which were previously exempt except in specific cases. The rule also revised the TSR’s recordkeeping requirements to reflect changes in technology and telemarketing methods, including requirements to maintain detailed call records and consent documentation and to document compliance with the DNC Registry.

    In addition to these updates, the FTC proposed a rule that would enhance its ability to tackle tech support scams by extending the TSR's coverage to include inbound telemarketing calls for technical support services. This amendment addressed deceptive tech support schemes and would empower the FTC to seek stronger legal remedies such as civil penalties and consumer compensation. The Commission invited public feedback on a proposed definition of tech support scams.

    Agency Rule-Making & Guidance Federal Issues FTC TSR Artificial Intelligence

  • U.S. Attorney General taps professor to lead new technology-focused roles

    Fintech

    On February 22, U.S. Attorney General Merrick B. Garland announced that he tapped Jonathan Mayer to serve in the DOJ’s first Chief Science and Technology Advisor and Chief Artificial Intelligence (AI) Officer roles. The roles are housed in the DOJ’s Office of Legal Policy, which is developing a team of technical and policy experts in technology-related areas important to the Department’s responsibilities, including cybersecurity and AI, with the aim of advising leadership and collaborating with other components across the Department and with federal partners on cutting-edge technological issues. As the first Chief Science and Technology Advisor, Mayer will contribute technical expertise on cybersecurity, AI, and emergent technology matters.

    The Chief AI Officer role was created pursuant to a presidential executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. In this role, Mayer will work on intra-departmental and cross-agency efforts on AI and adjacent issues, and he will also lead the Justice Department’s newly established Emerging Technology Board, which coordinates and governs AI and other emerging technologies across the Department.

    Mayer holds a Ph.D. in computer science from Stanford University and a J.D. from Stanford Law School. He is an assistant professor at Princeton University’s Department of Computer Science and School of Public and International Affairs, where his research focuses on the intersection of technology, policy, and law, with an emphasis on criminal procedure, national security, and consumer protection.

    Fintech Department of Justice Artificial Intelligence

  • FCC’s Rosenworcel relaunches Consumer Advisory Committee; focuses on AI consumer issues

    Privacy, Cyber Risk & Data Security

    On February 20, FCC Chairwoman Jessica Rosenworcel announced that the FCC will relaunch the Consumer Advisory Committee (CAC). The CAC will focus on how emerging artificial intelligence (AI) technologies affect consumer privacy and protection, including how the FCC can better protect consumers against “unwanted and illegal” calls, among other things. The CAC has 28 members comprising companies, non-profit entities, trade organizations, and individuals; a full list of members can be found here. The first meeting is on April 4 at 10:30 a.m. Eastern Time and will be open to the public via live broadcast.

    Privacy, Cyber Risk & Data Security FCC Advisory Committee Artificial Intelligence Privacy

  • FTC proposes two actions to combat AI impersonation fraud

    Agency Rule-Making & Guidance

    On February 15, the FTC announced two actions to protect consumers from impersonation fraud, particularly impersonations of government entities. The first was a final rule prohibiting the impersonation of government, businesses, and their officials or agents in interstate commerce. The second was a notice seeking public comment on a supplemental proposed rulemaking that would revise the final rule to add a prohibition on the impersonation of individuals and extend liability to those who provide goods and services with knowledge, or reason to know, that they will be used in unlawful impersonations. In tandem, these actions sought to prohibit the impersonation of government and business officials.

    The FTC noted that these two actions stem from “surging complaints” about impersonation fraud, particularly from artificial intelligence-generated deepfakes. The final rule will expand available remedies to include monetary relief, and the FTC stated the rule will provide a “shorter, faster and more efficient path” for injured consumers to recover money. The rule will enable the FTC to seek monetary relief from scammers that use government seals or business logos, spoof government and business emails, and impersonate a government official or falsely imply a business affiliation.

    Agency Rule-Making & Guidance FTC Artificial Intelligence Fraud NPR

  • FCC ruling determines AI calls are subject to TCPA regulations

    Federal Issues

    On February 8, the FCC announced the unanimous adoption of a declaratory ruling recognizing that calls made with AI-generated voices are “artificial” under the Telephone Consumer Protection Act (TCPA). The declaratory ruling notes that the TCPA prohibits initiating “any telephone call to any residential telephone line using an artificial or prerecorded voice to deliver a message without the prior express consent of the called party” unless certain exceptions apply. The TCPA also prohibits “any non-emergency call made using an automatic telephone dialing system or an artificial or prerecorded voice to certain specified categories of telephone numbers including emergency lines and wireless numbers.”

    The ruling, effective immediately, deemed voice cloning and similar AI technologies to be artificial voice messages under the TCPA, subject to its regulations. Therefore, prior express consent from the called party is required before making such calls. Additionally, callers using AI technology must provide identification and disclosure information and offer opt-out methods for telemarketing calls.

    This ruling provided State Attorneys General nationwide with additional resources to pursue the perpetrators responsible for these robocalls. This action followed the Commission’s November proposed inquiry into how AI could impact unwanted robocalls and texts (announcement covered by InfoBytes here).

    Federal Issues Agency Rule-Making & Guidance Artificial Intelligence FCC TCPA Consumer Protection

  • SEC Chair Gensler weighs in on AI risks and SEC’s positioning

    Privacy, Cyber Risk & Data Security

    On February 13, SEC Chair Gary Gensler delivered a speech, “AI, Finance, Movies, and the Law,” at Yale Law School. In his speech, Gensler addressed the crossovers between artificial intelligence (AI) and finance, system-wide risks on a macro scale, AI-enabled deception, AI washing, and AI hallucinations, among other topics.

    Gensler discussed the benefits of using AI in finance, including greater financial inclusion and efficiencies. However, he highlighted that the use of AI amplifies many issues, noting how AI models can be flawed in making decisions, propagating biases, and offering predictions. On a system-wide level, Gensler opined that policy decisions will require new thinking to overcome the challenges to financial stability that AI could create. Gensler also addressed AI washing, stating that it may violate securities laws and emphasizing that any disclosures regarding AI by SEC registrants should still follow the “basics of good securities lawyering”: disclosing material risks, defining the risk carefully, and avoiding disclosures that could mislead the public regarding the use of an AI model. Lastly, Gensler warned about AI hallucinations, saying that advisors and brokers are not supposed to give investment advice based on inaccurate information, closing with: “You don’t want your broker or advisor recommending investments they hallucinated while on mushrooms.”

    Privacy, Cyber Risk & Data Security Artificial Intelligence Securities Exchange Act Securities AI

  • FCC Chairwoman proposes making all AI-generated robocalls “illegal” to help State Attorneys General

    Agency Rule-Making & Guidance

    On January 31, FCC Chairwoman Jessica Rosenworcel released a statement proposing that the FCC “recognize calls made with AI-generated voices are ‘artificial’ voices under the Telephone Consumer Protection Act (TCPA), which would make voice cloning technology used in common robocall scams targeting consumers illegal.” Such technology has been used to impersonate celebrities, political candidates, and even close family members. Chairwoman Rosenworcel stated, “No matter what celebrity or politician you favor… it is possible we could all be a target of these faked calls… That’s why the FCC is taking steps to recognize this emerging technology as illegal… giving our partners at State Attorneys General offices… new tools they can use to crack down on these scams and protect customers.”

    This action comes after the FCC released a Notice of Inquiry last month, in which the FCC received comments from 26 State Attorneys General on how the FCC can better protect consumers from AI-generated telemarketing, as covered by InfoBytes here. This is not the first time the FCC has targeted robocallers: as previously covered by InfoBytes, in October 2023 the FCC proposed an inquiry into how AI is used to create unwanted robocalls and texts, and in September 2023 the FCC updated its rules to curb robocalls made via Voice over Internet Protocol, covered here.

    Agency Rule-Making & Guidance FCC TCPA Artificial Intelligence Robocalls State Attorney General

  • White House provides three-month update on its AI executive order

    Federal Issues

    On January 29, President Biden released a statement detailing how federal agencies have fared in complying with Executive Order 14110 regarding artificial intelligence (AI) development and safety. As previously covered by InfoBytes, President Biden’s October 30, 2023, Executive Order outlined how the federal government can promote AI safely and securely to protect U.S. citizens’ rights.

    The statement notes that federal agencies have (i) used the Defense Production Act to have AI developers report vital information to the Department of Commerce; (ii) proposed a draft rule addressing U.S. cloud companies that provide computing power for foreign AI training; and (iii) completed risk assessments for “vital” aspects of society. The statement further outlines how (iv) the NSF managed a pilot program to ensure that AI resources are equitably accessible to the research and education communities; (v) the NSF began the EducateAI initiative to create AI educational opportunities in K-12 through undergraduate institutions; (vi) the NSF promoted the funding of new Regional Innovation Engines to assist in creating breakthrough clinical therapies; (vii) the OPM launched the Tech Talent Task Force to accelerate the hiring of data scientists in the government; and (viii) the DHHS established an AI Task Force to provide “regulatory clarity” in health care. Lastly, the statement provides additional information on various agency activities completed in response to the Executive Order. More can be found at ai.gov.

    Federal Issues Biden White House Artificial Intelligence Executive Order

  • Securities regulators issue guidance and an RFC on AI trading scams

    Financial Crimes

    On January 25, FINRA and the CFTC released advisory guidance on artificial intelligence (AI) fraud, with the latter also issuing a formal request for comment. FINRA released an advisory titled “Artificial Intelligence (AI) and Investment Fraud” to make investors aware of the growing use of AI and other emerging technologies by scammers committing investment fraud, describing popular scam tactics and offering protective steps. The CFTC released a customer advisory called “AI Won’t Turn Trading Bots into Money Machines,” which focused on trading platforms that claim AI-created algorithms can guarantee huge returns.

    Specifically, in its notice FINRA stated that registration is a good indicator of sound investment advice and offered the Investor.gov tool as a means to check registration; however, even registered firms and professionals can make claims that sound too good to be true, so “be wary.” FINRA also warned about investing in companies that tout AI involvement, which often use catchy buzzwords or claim to “guarantee huge gains.” Some companies may engage in pump-and-dump schemes, in which promoters “pump” up a stock price by spreading false information and then “dump” their own shares before the stock’s value drops. FINRA’s guidance additionally discussed the use of celebrity endorsements on social media to promote investments; FINRA stated that social media has become “more saturated with financial content than ever before,” leading to the rise of “finfluencers.” Finally, FINRA noted that AI-enabled technology allows scammers to create “deepfake” videos and audio recordings to spread false information, such as impersonating a victim’s family members or a CEO announcing false news to manipulate a stock’s price, and to create realistic marketing materials.

    The CFTC’s advisory highlighted how scammers use AI to create algorithmic trading platforms using “bots” that automatically buy and sell. In one case cited by the CFTC, a scammer defrauded customers into selling him nearly 30,000 bitcoins, worth over $1.7 billion at the time. The CFTC also posted a Request for Comment on the Use of Artificial Intelligence in CFTC-Regulated Markets, which listed eight questions addressing current and potential uses of AI by regulated entities, and several more addressing concerns regarding the use of AI in regulated markets and by regulated entities, for the public to respond to.

    Financial Crimes FINRA Artificial Intelligence CFTC Securities Exchange Commission Fraud Securities
