
InfoBytes Blog

Financial Services Law Insights and Observations


  • Fed’s Cook delivers remarks regarding artificial intelligence

    On October 1, Fed Board of Governors Member Lisa D. Cook delivered remarks titled “Artificial Intelligence, Big Data, and the Path Ahead for Productivity” at a conference organized by the Federal Reserve Banks of Atlanta, Boston, and Richmond. She addressed the implications of new technologies for productivity, emphasizing that while AI adoption is becoming more widespread, its effect on productivity, labor markets, and employment remains uncertain.

    Cook highlighted that AI and generative AI could drive significant productivity gains across industries, citing a Federal Reserve Bank of St. Louis study finding that generative AI was being adopted faster than previous technologies such as the internet and personal computers. Despite that potential, Cook noted that recent productivity gains have been modest and that realizing AI-driven improvements will depend on firms, workers, and policymakers. She mentioned that adapting AI to specific business contexts can be complex and time-consuming, requiring significant investments and organizational changes.

    Bank Regulatory AI Federal Reserve Georgia Big Data

  • NYDFS issues guidance on AI insurance discrimination

    State Issues

    On July 11, NYDFS issued Insurance Circular No. 7 to address the use of AI systems and External Consumer Data and Information Sources (ECDIS) in the underwriting and pricing of insurance policies in New York State. NYDFS outlined its expectations for insurers regarding the responsible and compliant use of the technologies, emphasizing the need to abide by existing laws and regulations that prohibit unfair discrimination.

    Key points in the circular included:

    • Definitions of AI systems and ECDIS, along with the expectation that insurers understand that “traditional underwriting” does not include the use of these technologies.
    • Insurers must conduct proxy assessments to ensure ECDIS will not result in discrimination based on protected classes; the circular also clarifies what a proxy assessment may entail (see the illustrative sketch after this list).
    • NYDFS expects insurers to maintain robust governance and risk management practices, including board and senior management oversight, policies and procedures, and risk management frameworks.
    • Insurers are responsible for the oversight of third-party vendors providing AI systems and ECDIS, ensuring compliance with laws and regulations.
    • NYDFS does not guarantee the confidentiality of submitted information, as it must comply with disclosure laws.
    • The circular emphasizes transparency, requiring insurers to disclose the use of AI systems and ECDIS in underwriting and pricing decisions and to provide reasons for any adverse decisions to consumers.
    • Insurers must keep up-to-date documentation and be prepared for NYDFS audits and reviews regarding the use of these technologies.
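
    For readers who want a sense of what a proxy assessment might involve in practice, the sketch below is a minimal, purely illustrative example, not a methodology drawn from the circular. It assumes a hypothetical dataset with an ECDIS-derived score, a protected-class indicator, and a binary underwriting outcome, and computes two common screens: the correlation between the ECDIS feature and protected-class membership, and an adverse impact ratio on outcomes.

    ```python
    # Purely illustrative proxy check on synthetic data; hypothetical column
    # names, not a methodology prescribed by NYDFS Insurance Circular No. 7.
    import pandas as pd

    def proxy_correlation(df: pd.DataFrame, feature: str, protected: str) -> float:
        """Pearson correlation between an ECDIS-derived feature and protected-class membership."""
        return df[feature].corr(df[protected].astype(float))

    def adverse_impact_ratio(df: pd.DataFrame, outcome: str, protected: str) -> float:
        """Ratio of favorable-outcome rates: protected group vs. everyone else."""
        rate_protected = df.loc[df[protected] == 1, outcome].mean()
        rate_other = df.loc[df[protected] == 0, outcome].mean()
        return rate_protected / rate_other

    # Synthetic example data (hypothetical).
    df = pd.DataFrame({
        "ecdis_score": [0.9, 0.4, 0.8, 0.3, 0.7, 0.2],   # hypothetical ECDIS-derived score
        "protected":   [0,   1,   0,   1,   0,   1],     # protected-class indicator
        "approved":    [1,   0,   1,   0,   1,   1],     # underwriting outcome
    })
    print(f"proxy correlation: {proxy_correlation(df, 'ecdis_score', 'protected'):.2f}")
    print(f"adverse impact ratio: {adverse_impact_ratio(df, 'approved', 'protected'):.2f}")
    ```

    An insurer’s actual assessment would follow the circular’s requirements and its own compliance framework; this snippet only illustrates the general idea of testing whether an external data source acts as a proxy for a protected characteristic.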

    State Issues NYDFS AI Insurance Consumer Protection Disclosures

  • SEC Chair Gensler weighs in on AI risks and SEC’s positioning

    Privacy, Cyber Risk & Data Security

    On February 13, SEC Chair Gary Gensler delivered a speech, “AI, Finance, Movies, and the Law,” at Yale Law School. In his remarks, Gensler spoke on the intersection of artificial intelligence (AI) and finance, system-wide risks at a macro scale, AI-enabled deception, AI washing, and hallucinations, among other topics.

    Gensler discussed the benefits of using AI in finance, including greater financial inclusion and efficiencies. However, he highlighted that the use of AI amplifies many existing issues, noting how AI models can be flawed in making decisions, propagating biases, and offering predictions. On a system-wide level, Gensler opined that policy decisions will require new thinking to overcome the challenges to financial stability that AI could create. Gensler also addressed AI washing, stating that it may violate securities laws and emphasizing that any disclosures regarding AI by SEC registrants should still follow the “basics of good securities lawyering”: disclosing material risks, defining the risk carefully, and avoiding disclosures that could mislead the public regarding the use of an AI model. Lastly, Gensler warned about AI hallucinations, saying that advisors and brokers are not supposed to give investment advice based on inaccurate information, closing with “You don’t want your broker or advisor recommending investments they hallucinated while on mushrooms.”

    Privacy, Cyber Risk & Data Security Artificial Intelligence Securities Exchange Act Securities AI

  • FTC, DOJ convene with G7 on AI policy future

    Securities

    On November 8, the FTC and DOJ met with the G7 Competition Authorities and Policymakers’ Summit on how to better regulate AI while addressing its competitive concerns. The Summit took place in Tokyo, Japan, and both the FTC and the DOJ’s Antitrust Division participated alongside the international group. The G7 issued a statement on how generative AI can pose not only anti-competitive risks but also risks related to “privacy, intellectual property rights, transparency and other concerns.” The policymakers shared concerns about how best to enforce fair competition laws with respect to AI, reiterating that “existing competition law applies to [AI]” and that they were “prepared to confront abuses if AI becomes dominated by a few players with market power.” The G7 stated a need to enforce competition laws and “develop policies necessary to ensure that principles of fair competition are applied to digital markets.”

    The G7’s report outlines its initiatives to promote and protect competition in digital markets, its commitment to address competition concerns, and its recognition of the need for international cooperation on digital competition.

    Securities G7 FTC DOJ Antitrust AI

  • FCC proposes inquiry on AI’s role in unwanted robocalls and texts

    Federal Issues

    On October 20, FCC Chairwoman Rosenworcel announced a proposed inquiry into how artificial intelligence could impact unwanted robocalls and texts. If adopted at the Commission’s forthcoming public open meeting on November 15, 2023, the proposal would initiate an examination of how the use of AI technologies could impact regulation under the Telephone Consumer Protection Act (TCPA). Specifically, the inquiry would seek public comment on (i) how AI technology fits into the Commission’s duties outlined in the TCPA; (ii) the circumstances under which future AI technology would fall under the TCPA; (iii) the influence of AI on existing regulatory structures and the development of future policies; (iv) whether the Commission should explore methods of verifying the legitimacy of AI-generated voice or text content from reliable sources; and (v) next steps.

    Federal Issues Robocalls AI TCPA FCC Consumer Protection
