InfoBytes Blog

Financial Services Law Insights and Observations


  • FCC Chairwoman proposes making all AI-generated robocalls “illegal” to help State Attorneys General

    Agency Rule-Making & Guidance

    On January 31, FCC Chairwoman Jessica Rosenworcel released a statement proposing that the FCC “recognize calls made with AI-generated voices are ‘artificial’ voices under the Telephone Consumer Protection Act (TCPA), which would make voice cloning technology used in common robocall scams targeting consumers illegal.” The technology has been used to impersonate celebrities, political candidates, and even close family members. Chairwoman Rosenworcel stated, “No matter what celebrity or politician you favor… it is possible we could all be a target of these faked calls… That’s why the FCC is taking steps to recognize this emerging technology as illegal… giving our partners at State Attorneys General offices… new tools they can use to crack down on these scams and protect customers.”

    This action follows a Notice of Inquiry the FCC released last month, in response to which 26 State Attorneys General submitted comments on how the FCC can better protect consumers from AI-generated telemarketing, as covered by InfoBytes here. This is not the first time the FCC has targeted robocallers: as previously covered by InfoBytes, in October 2023 the FCC proposed an inquiry into how AI is used to create unwanted robocalls and texts, and in September 2023 the FCC updated its rules to curb robocalls made using Voice over Internet Protocol technology, covered here.

    Agency Rule-Making & Guidance FCC TCPA Artificial Intelligence Robocalls State Attorney General

  • White House provides three-month update on its AI executive order

    Federal Issues

    On January 29, President Biden released a statement detailing how federal agencies have fared in complying with Executive Order 14110 on artificial intelligence (AI) development and safety. As previously covered by InfoBytes, the Executive Order, issued October 30, 2023, outlined how the federal government can promote safe and secure AI development while protecting U.S. citizens’ rights.

    The statement notes that federal agencies have (i) used the Defense Production Act to require AI developers to report vital information to the Department of Commerce; (ii) proposed a draft rule that would require U.S. cloud companies providing computing power for foreign AI training to report that activity; and (iii) completed risk assessments for “vital” aspects of society. The statement further outlines how (iv) the NSF managed a pilot program to ensure that AI resources are equitably accessible to the research and education communities; (v) the NSF began the EducateAI initiative to create AI educational opportunities from K-12 through undergraduate institutions; (vi) the NSF announced funding for new Regional Innovation Engines, including work to create breakthrough clinical therapies; (vii) the OPM launched the Tech Talent Task Force to accelerate the hiring of data scientists in government; and (viii) the DHHS established an AI Task Force to provide “regulatory clarity” in health care. Lastly, the statement provides additional information on various agency activities completed in response to the Executive Order. More on this can be found at ai.gov.

    Federal Issues Biden White House Artificial Intelligence Executive Order

  • Securities regulators issue guidance and an RFC on AI trading scams

    Financial Crimes

    On January 25, FINRA and the CFTC released advisory guidance on artificial intelligence (AI) fraud, with the CFTC also issuing a formal request for comment. FINRA’s advisory, “Artificial Intelligence (AI) and Investment Fraud,” alerts investors to the growing use of AI and other emerging technologies in investment fraud, describes popular scam tactics, and offers protective steps. The CFTC’s customer advisory, “AI Won’t Turn Trading Bots into Money Machines,” focuses on trading platforms that claim AI-created algorithms can guarantee huge returns.

    In its notice, FINRA stated that registration is a good indicator of sound investment advice and offered the Investor.gov tool as a means to check registration; however, even registered firms and professionals can make claims that sound too good to be true, so investors should “be wary.” FINRA also warned about investing in companies that tout AI involvement, which often rely on catchy buzzwords or claims that “guarantee huge gains.” Some companies may engage in pump-and-dump schemes, in which promoters “pump” up a stock price by spreading false information and then “dump” their own shares before the stock’s value drops. FINRA’s guidance additionally discussed the use of celebrity endorsements to promote investments on social media, noting that social media has become “more saturated with financial content than ever before,” leading to the rise of “finfluencers.” Finally, FINRA noted that AI-enabled technology allows scammers to create “deepfake” videos and audio recordings to spread false information; scammers have used AI to impersonate a victim’s family members, to pose as a CEO announcing false news to manipulate a stock’s price, and to create realistic marketing materials.

    The CFTC’s advisory highlighted how scammers use AI claims to market algorithmic trading platforms whose “bots” automatically buy and sell. In one case cited by the CFTC, a scammer fraudulently solicited nearly 30,000 bitcoin from customers, worth over $1.7 billion at the time. The CFTC also posted a Request for Comment on the Use of Artificial Intelligence in CFTC-Regulated Markets. The Request listed eight questions addressing current and potential uses of AI by regulated entities, and several more addressing concerns about the use of AI in regulated markets and entities.

    Financial Crimes FINRA Artificial Intelligence CFTC Securities Exchange Commission Fraud Securities

  • FTC hosts tech summit on artificial intelligence; CFPB weighs in

    Agency Rule-Making & Guidance

    On January 25, the FTC hosted a virtual tech summit focused on artificial intelligence (AI). The summit featured speakers from the FTC (including all three commissioners), as well as software engineers, lawyers, technologists, entrepreneurs, journalists, and researchers, among others. First, Commissioner Slaughter described three main acts that led to the current effort to create guardrails for AI use: first, social media emerged; second, industry groups and whistleblowers rang the alarm on data privacy, forcing regulators to play catch-up; and third, regulators must now urgently grapple with difficult social externalities, such as impacts on society and political elections.

    The first panel discussed the various business models at play in the AI space. One journalist spoke on the recent Hollywood writers’ strike, opining that copyright law is a poor legal framework for regulating AI and suggesting labor and employment law as a better model. An analyst at a venture capital firm discussed how her firm finds investment opportunities by reviewing which companies use a large language model as opposed to a transformer model, the latter being more attractive to that firm.

    Before the second panel, Commissioner Bedoya discussed the need for fair and safe AI, saying that for the FTC to be successful, it must execute policy with two principles in mind: first, people need to be in control of technology and decision making, not the other way around; and second, competition must be safeguarded so that the most popular technology is the one that works best, not just the one created by the largest companies.

    During the second panel, a lawyer from the CFPB said the agency is doing “a lot” with regard to AI and affords AI technology no exemptions under the laws it oversees. The CFPB recently issued releases on how “black box” models used in credit decision making need to be fair and free from bias. On future AI enforcement, the lawyer said, speaking at a “high level,” that the CFPB is currently “capacity building”: it is building out more intellectually diverse resources, including its recently created technologist program.

    Agency Rule-Making & Guidance FTC Artificial Intelligence CFPB Technology

  • NYDFS offers guidance to insurers on AI models

    State Issues

    On January 17, NYDFS issued a guidance letter on artificial intelligence (AI) intended to help licensed insurers understand NYDFS’s expectations for combating discrimination and bias when using AI in connection with underwriting. The guidance applies to all insurers authorized to write insurance in New York State and is designed to help insurers develop AI systems, data information systems, and predictive models while “mitigat[ing] potential harm to consumers.”

    The guidance letter states that while the use of AI can potentially result in more accurate underwriting and pricing of insurance, AI technology can also “reinforce and exacerbate” systemic biases and inequality. As part of the letter’s fairness principles, NYDFS states that an insurer should not use underwriting or pricing technologies “unless the insurer can establish that the data source or model… is not biased in any way” with respect to any class protected under New York insurance law. Further, insurers are expected to demonstrate that technology-driven underwriting and pricing decisions are supported by generally accepted actuarial standards of practice and based on actual or reasonably anticipated experience. The letter also notes that the guidance builds on New York Governor Hochul’s statewide policies governing AI.
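
    For illustration only: the guidance letter does not prescribe a testing methodology, but one common first step in checking a model for bias against a protected class is an adverse impact ratio comparison of favorable-outcome rates across groups. The Python sketch below is hypothetical; the group labels, example figures, and the 80 percent screening threshold (borrowed from employment-law practice) are assumptions, not NYDFS requirements.

    # Hypothetical adverse impact ratio (AIR) check for an underwriting
    # model's outcomes. NYDFS's guidance letter does not prescribe any
    # particular metric or cutoff; the 0.8 threshold is an assumption
    # borrowed from employment-law practice.

    def adverse_impact_ratio(favorable_protected: int, total_protected: int,
                             favorable_reference: int, total_reference: int) -> float:
        """Ratio of the protected group's favorable-outcome rate to the
        reference group's rate; values well below 1.0 warrant scrutiny."""
        rate_protected = favorable_protected / total_protected
        rate_reference = favorable_reference / total_reference
        return rate_protected / rate_reference

    # Example: 300 of 500 protected-class applicants and 450 of 600
    # reference-group applicants received a favorable underwriting outcome.
    air = adverse_impact_ratio(300, 500, 450, 600)
    print(f"Adverse impact ratio: {air:.2f}")  # 0.60 / 0.75 = 0.80

    if air < 0.8:  # illustrative screen, not a regulatory standard
        print("Potential disparate impact -- investigate data and model.")
    else:
        print("Passes the 80% screen; further fairness testing still advisable.")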

    State Issues NYDFS Artificial Intelligence GAAP Racial Bias Discrimination Insurance Underwriting

  • 26 State Attorneys General opine on FCC’s Notice of Inquiry regarding AI telemarketing

    Federal Issues

    On January 17, the State Attorneys General from 26 states submitted reply comments to the FCC’s Notice of Inquiry (the Notice) on how artificial intelligence (AI) technologies are impacting consumers. The information gleaned in response to the Notice is intended to help the FCC better protect consumers from AI-generated telemarketing in violation of the TCPA. The State AGs urged that any AI-generated voice should be considered an “artificial voice” under the TCPA to avoid “opening the door to potential, future rulemaking proceedings” that allow telemarketing agencies to use AI-assisted technologies in outbound calls without the prior written consent of a consumer. 

    Federal Issues State Attorney General FCC Artificial Intelligence Telemarketing TCPA

  • FTC report details key takeaways from AI and creative fields panel discussion

    Federal Issues

    On December 18, the FTC released a report highlighting key takeaways from its October panel discussion on generative artificial intelligence (AI) and “creative industries.” As previously covered by InfoBytes, the FTC hosted a virtual roundtable to hear directly from creators about how generative AI is affecting their work and livelihoods, given the FTC’s interest in understanding how AI tools impact competition and business practices. The report summarizes insights gathered during the roundtable and explains the FTC’s particular jurisdictional interest in regulating AI, noting that the FTC has brought several recent enforcement actions relating to AI and that the use of AI can potentially violate Section 5 of the FTC Act, which “prohibits unfair or deceptive acts or practices and unfair methods of competition.” Additionally, the report mentions that President Biden’s recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI (covered by InfoBytes here) encourages the FTC to leverage its existing authorities to protect consumers from harms caused by AI and to ensure competition in the marketplace, and it explains that the FTC is appropriately taking such actions, both through enforcement and by gathering information.

    The Commission additionally warned that training generative AI on a creator’s “protected expression” without the creator’s consent, or selling the generated output, could constitute an unfair method of competition or an unfair or deceptive practice. This risk may be amplified by conduct that deceives consumers, improperly uses a creator’s reputation, reduces the value of a creator’s work, exposes private information, or otherwise causes substantial injury to consumers. The Commission further warned that “conduct that may be consistent with other bodies of law nevertheless may violate Section 5.”

    Federal Issues FTC Artificial Intelligence Competition Consumer Protection FTC Act Unfair

  • FSOC report highlights AI, climate, banking, and fintech risks; CFPB comments

    Privacy, Cyber Risk & Data Security

    On December 14, the Financial Stability Oversight Council (FSOC) released its 2023 Annual Report on vulnerabilities in the financial system and recommendations to mitigate those risks. CFPB Director Rohit Chopra cited the report in a statement to the Secretary of the Treasury, saying “[i]t is not enough to draft reports [on cloud infrastructure and artificial intelligence], we must also act,” and calling for plans in the coming year to ensure financial stability with respect to digital technology. In its report, the FSOC notes the U.S. banking system “remains resilient overall” despite the banking stress earlier in the year. The FSOC’s analysis assesses the health of large and regional banks by reviewing each bank’s capital and profitability, credit quality and lending standards, and liquidity and funding. On regional banks specifically, the FSOC highlights that they carry greater exposure to commercial real estate loans than large banks, a risk heightened by higher interest rates.

    In addition, the FSOC views climate-related financial risks as a threat to U.S. financial stability, presenting both physical and transition risks. Physical risks are acute events such as floods, droughts, wildfires, or hurricanes, which can impose additional risk-mitigation costs, force firm relocations, or threaten access to fair credit. Transition risks include technological changes, policy shifts, and changes in consumer preferences, all of which can force firms to take on additional costs. The FSOC notes that, as of September 2023, the U.S. had experienced 24 climate disaster events with losses exceeding $1 billion each, more than the prior five-year annual average of 18 events (2018 to 2022). The FSOC also notes that member agencies should monitor how third-party service providers, such as fintech firms, address risks in core processing, payment services, and cloud computing. To support the need for oversight of these partnerships, the FSOC cites a study finding that 95 percent of cloud breaches are due to human error. The FSOC highlights that fintech firms face compliance, financial, operational, and reputational risks, particularly when they are not subject to the same compliance standards as banks.

    Notably, the FSOC is the first top regulator to state that the use of artificial intelligence (AI) technology presents an “emerging vulnerability” in the U.S. financial system. The report notes that firms may use AI for fraud detection and prevention, as well as for customer service, and that AI offers benefits for financial institutions, including reducing costs, improving efficiency, identifying complex relationships, and improving performance. The FSOC states that while “AI has the potential to spur innovation and drive efficiency,” it requires “thoughtful implementation and supervision” to mitigate potential risks.

    Privacy, Cyber Risk & Data Security Bank Regulatory FSOC CFPB Artificial Intelligence Banks Fintech

  • EU Commission, Council, and Parliament agree on details of AI Act

    Privacy, Cyber Risk & Data Security

    On December 9, the EU Commission announced a political agreement between the European Parliament and the European Council on the proposed Artificial Intelligence Act (AI Act). The agreement is provisional, subject to finalization of the text and formal approval by lawmakers in the European Parliament and the Council. The AI Act will regulate the development and use of AI systems and impose fines for non-compliant use. The objective of the law is to ensure that AI technology is safe and that its use respects fundamental democratic rights, while balancing the need to allow businesses to grow and thrive. The AI Act will also create a new European AI Office to ensure coordination and transparency and to “supervise the implementation and enforcement of the new rules.” According to an EU Parliament press release, powerful foundation models that pose systemic risks will be subject to specific rules in the final version of the AI Act based on a tiered classification.

    Except for foundation models, the AI Act adopts a risk-based approach to the regulation of AI systems, classifying them into different risk categories: minimal risk, high risk, and unacceptable risk. Most AI systems would be deemed minimal risk because they pose little to no risk to citizens’ safety. High-risk AI systems would be subject to the heaviest obligations, including certification of risk-mitigation systems, data governance, activity logging, documentation obligations, transparency requirements, human oversight, and cybersecurity standards. Examples of high-risk AI systems include those used in utility infrastructure, medical devices, admission to educational institutions, law enforcement, biometric identification and categorization, and emotion recognition. AI systems deemed “unacceptable” are those that “present a clear threat to the fundamental rights of people,” such as systems that manipulate human behavior, like “deep fakes,” and any type of social scoring by governments or companies. While some biometric identification is allowed, “unacceptable” uses include emotion recognition systems in the workplace or by law enforcement agencies (with narrow exceptions).

    Sanctions for breaches of the law will range from €7.5 million or 1.5 percent of a company’s global annual revenue at the low end to €35 million or 7 percent of revenue at the high end. Once adopted, the law will take effect in early 2026 at the earliest. Compliance will be challenging because the law reaches AI systems made available in the EU, and companies should assess whether their use or development of such systems will be affected.
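
    As a rough illustration of the tiered penalty structure: under the EU’s customary formulation (as in the GDPR), the applicable cap for a company is generally the higher of the fixed amount and the revenue-based amount, though the final text should be consulted for the authoritative tiers and any small-company carve-outs. The Python sketch below assumes that “higher of” rule, and the middle tier’s figures are likewise an assumption.

    # Illustrative sketch of the AI Act's tiered fine caps. The top and
    # bottom tier figures reflect the provisional agreement as described
    # above; the middle tier and the "higher of" rule are assumptions
    # based on the EU's customary penalty formulation.

    FINE_TIERS = {
        # tier name: (fixed cap in euros, share of global annual revenue)
        "prohibited_practices":  (35_000_000, 0.07),   # top tier
        "other_obligations":     (15_000_000, 0.03),   # assumed middle tier
        "incorrect_information": (7_500_000, 0.015),   # bottom tier
    }

    def fine_cap(tier: str, global_revenue_eur: float) -> float:
        """Return the applicable cap: the higher of the fixed amount and
        the revenue-based amount (the usual rule for large companies)."""
        fixed_cap, revenue_share = FINE_TIERS[tier]
        return max(fixed_cap, revenue_share * global_revenue_eur)

    # Example: a company with EUR 2 billion in global revenue committing a
    # top-tier violation faces a cap of max(35M, 7% of 2B) = EUR 140M.
    print(f"EUR {fine_cap('prohibited_practices', 2_000_000_000):,.0f}")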

    Privacy, Cyber Risk & Data Security Privacy European Union Artificial Intelligence Privacy/Cyber Risk & Data Security Of Interest to Non-US Persons

  • FTC approves measures for compulsory process use for AI-related products and services

    Agency Rule-Making & Guidance

    On November 21, the FTC approved an omnibus resolution in a 3-0 vote authorizing the use of compulsory process in nonpublic inquiries involving products and services produced, or claimed to be produced, using artificial intelligence (AI). The resolution aims to streamline FTC staff’s issuance of civil investigative demands (CIDs) in AI-related investigations while preserving the Commission’s authority to decide when CIDs are issued. The resolution remains valid for 10 years.

    Agency Rule-Making & Guidance Federal Issues FTC Artificial Intelligence
