
InfoBytes Blog

Financial Services Law Insights and Observations


  • EU Commission, Council, and Parliament agree on details of AI Act

    Privacy, Cyber Risk & Data Security

    On December 9, the EU Commission announced a political agreement between the European Parliament and the European Council regarding the proposed Artificial Intelligence Act (AI Act). The agreement is provisional and subject to finalization of the text and formal approval by lawmakers in the European Parliament and the Council. The AI Act will regulate the development and use of AI systems and impose fines for non-compliant use. The object of the law is to ensure that AI technology is safe and that its use respects fundamental democratic rights, while balancing the need to allow businesses to grow and thrive. The AI Act will also create a new European AI Office to ensure coordination and transparency and to “supervise the implementation and enforcement of the new rules.” According to this EU Parliament press release, powerful foundation models that pose systemic risks will be subject to specific rules in the final version of the AI Act based on a tiered classification.

    Except for foundation models, the EU AI Act adopts a risk-based approach to the regulation of AI systems, classifying them into different risk categories: minimal risk, high risk, and unacceptable risk. Most AI systems would be deemed minimal risk since they pose little to no risk to citizens’ safety. High-risk AI systems would be subject to the heaviest obligations, including certifications on the adoption of risk-mitigation systems, data governance, logging of activity, documentation obligations, transparency requirements, human oversight, and cybersecurity standards. Examples of high-risk AI systems include those used in utility infrastructure, medical devices, institutional admissions, law enforcement, biometric identification and categorization, and emotion recognition. AI systems deemed “unacceptable” are those that “present a clear threat to the fundamental rights of people,” such as systems that manipulate human behavior, like “deep fakes,” and any type of social scoring done by governments or companies. While some biometric identification is allowed, “unacceptable” uses include emotion recognition systems in the workplace or by law enforcement agencies (with narrow exceptions).

    Sanctions for breach of the law will range from a low of €7.5 million or 1.5 percent of a company’s global annual revenue to as high as €35 million or 7 percent of revenue. Once adopted, the law will take effect in early 2026 at the earliest. Compliance will be challenging (the law targets AI systems made available in the EU), and companies should identify whether their use and/or development of such systems will be impacted.
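    To make the sanction bands concrete, the arithmetic can be sketched as follows. Reporting on the provisional agreement generally describes each fine as the greater of the fixed amount or the percentage of global annual turnover; that “whichever is higher” reading, and the sample turnover figure, are assumptions for illustration only.

    ```python
    # Hypothetical illustration of the AI Act fine bands described above,
    # assuming (as widely reported) that each fine is the greater of the
    # fixed amount or the percentage of global annual turnover.

    def potential_fine(turnover_eur: float, fixed_eur: float, pct: float) -> float:
        """Return the potential fine: the greater of a fixed amount or
        pct (e.g., 0.07 for 7 percent) of global annual turnover."""
        return max(fixed_eur, pct * turnover_eur)

    # For a hypothetical company with €2 billion in global annual turnover:
    low_band = potential_fine(2_000_000_000, 7_500_000, 0.015)   # €30 million
    high_band = potential_fine(2_000_000_000, 35_000_000, 0.07)  # €140 million
    ```

    For smaller companies the fixed amount dominates: at €100 million in turnover, 7 percent is €7 million, so the €35 million floor would apply under this reading.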

    Privacy, Cyber Risk & Data Security Privacy European Union Artificial Intelligence Privacy/Cyber Risk & Data Security Of Interest to Non-US Persons

  • NYDFS settles with title insurance company for $1 million

    Privacy, Cyber Risk & Data Security

    On November 27, the NYDFS entered into a consent order with a title insurance company, which required the company to pay $1 million for failing to maintain and implement an effective cybersecurity policy and to correct a cybersecurity vulnerability. The vulnerability allowed members of the public to access others’ nonpublic information, including driver’s license numbers, Social Security numbers, and tax and banking information. The consent order indicates the title insurance company discovered the vulnerability as early as 2018. The company’s failure to remediate the vulnerability violated Section 500.7 of the Cybersecurity Regulation.

    In May 2019, a cybersecurity journalist published an article on a vulnerability in the title insurance company’s application that led to the public exposure of 885 million documents, some of which could be found through search engine results. The journalist noted that “replacing the document ID in the web page URL… allow[ed] access to other non-related sessions without authentication.” Following the article, and as required by Section 500.17(a) of the Cybersecurity Regulation, the title insurance company notified NYDFS of the vulnerability, at which point NYDFS investigated further. The title insurance company has been ordered to pay the penalty no later than ten days after the effective date of the consent order.
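    The flaw the journalist described is the classic “insecure direct object reference” pattern: a record is returned based solely on an identifier in the URL, with no check that the requester is authorized to see it. A minimal sketch of the pattern and its fix, with entirely hypothetical names and data:

    ```python
    # Hypothetical sketch of the vulnerability class described above: an
    # "insecure direct object reference," where a record is returned based
    # solely on a document ID, with no authorization check. All names and
    # data here are invented for illustration.

    DOCUMENTS = {
        "00030ad1": {"owner": "alice", "body": "escrow wire instructions"},
        "00030ad2": {"owner": "bob", "body": "tax and banking records"},
    }

    def fetch_document_vulnerable(doc_id: str) -> str:
        # Anyone who guesses or increments a document ID gets the record.
        return DOCUMENTS[doc_id]["body"]

    def fetch_document_fixed(doc_id: str, requester: str) -> str:
        # Authorize before returning: the requester must own the record.
        record = DOCUMENTS[doc_id]
        if record["owner"] != requester:
            raise PermissionError("requester not authorized for this document")
        return record["body"]
    ```

    The fix is a per-request authorization check tied to the authenticated session, not a secret-looking identifier; sequential or guessable IDs make the vulnerable version trivially enumerable.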

    Privacy, Cyber Risk & Data Security State Issues Securities NYDFS Auto Insurance Enforcement

  • CFTC speech highlights new executives, dataset use, and AI Task Force

    Privacy, Cyber Risk & Data Security

    On November 16, the Chairman of the CFTC, Rostin Behnam, delivered a speech at the 2023 U.S. Treasury Market Conference in New York in which he outlined the CFTC’s plans to make better use of data and to roll out an internal AI task force. One of the CFTC’s initiatives is the hiring of two new executive-level roles: a Chief Data Officer and a Chief Data Scientist. These executives will manage how the CFTC uses AI tools and oversee current processes, including understanding large datasets, cleaning those datasets, identifying and monitoring pockets of stress, and combating spoofing.

    The CFTC also unveiled its plans to create an AI Task Force and to “gather[] information about the current and potential uses of AI by our registered entities, registrants, and market participants in areas such as trading, risk management, and cybersecurity.” The Commission plans to obtain feedback for the AI Task Force through a formal Request for Comment process in 2024. The CFTC hopes these comments will help the agency create a rulemaking policy on “safety and security, mitigation of bias, and customer protection.”

    Privacy, Cyber Risk & Data Security CFTC Big Data Artificial Intelligence Spoofing

  • NYDFS introduces guidelines for coin-listing and delisting policies in virtual currency entities

    State Issues

    On November 15, NYDFS announced new regulatory guidance that adopts new requirements for the coin-listing and delisting policies of DFS-regulated virtual currency entities, updating its 2020 framework for each policy. Issued after consideration of public comments, the new guidance aims to enhance standards for self-certification of coins and includes requirements for risk assessment, advance notification, and governance. It emphasizes stricter criteria for approving coins and mandates adherence to safety, soundness, and consumer protection principles. Virtual currency entities must comply with these guidelines, obtaining DFS approval for coin-listing policies before self-certifying coins and submitting detailed records for ongoing compliance review. The guidance also outlines procedures for delisting coins and requires virtual currency entities to have an approved coin-delisting policy.

    As an example under the coin-listing policy framework, the letter states that a virtual currency entity’s risk assessment must be tailored to its business activity and can include factors such as (i) technical design and technology risk; (ii) market and liquidity risk; (iii) operational risk; (iv) cybersecurity risk; (v) illicit finance risk; (vi) legal risk; (vii) reputational risk; (viii) regulatory risk; (ix) conflicts of interest; and (x) consumer protection. Regarding consumer protection, NYDFS says that virtual currency entities must “ensure that all customers are treated fairly and are afforded the full protection of all applicable laws and regulations, including protection from unfair, deceptive, or abusive practices.”

    Similar to the listing policy framework, the letter provides a detailed delisting policy framework. The letter also states that all virtual currency entities must meet with the DFS by December 8 to preview their draft coin-delisting policies and that final policies must be submitted to DFS for approval by January 31, 2024.

    State Issues Privacy Agency Rule-Making & Guidance Fintech Cryptocurrency Digital Assets NYDFS New York Consumer Protection

  • President Biden issues Executive Order targeting AI safety

    Federal Issues

    On October 30, President Biden issued an Executive Order (EO) outlining how the federal government can promote artificial intelligence (AI) safety and security to protect US citizens’ rights by: (i) directing AI developers to share critical information and test results with the U.S. government; (ii) developing standards for safe and secure AI systems; (iii) protecting citizens from AI-enabled fraud; (iv) establishing a cybersecurity program; and (v) creating a National Security Memorandum developed by the National Security Council to address AI security.

    President Biden also called on Congress to act by passing “bipartisan data privacy legislation” that (i) prioritizes federal support for privacy preservation; (ii) strengthens privacy technologies; (iii) evaluates agencies’ information collection processes for AI risks; and (iv) develops guidelines for federal agencies to evaluate privacy-preserving techniques. The EO additionally encourages agencies to use existing authorities to protect consumers and promote equity. As previously covered by InfoBytes, the FCC recently proposed to use AI to block unwanted robocalls and texts. The order further outlines how the U.S. can continue acting as a leader in AI innovation by catalyzing AI research, promoting a fair and competitive AI ecosystem, and expanding the highly skilled workforce by streamlining visa review.

    Federal Issues Privacy, Cyber Risk & Data Security White House Artificial Intelligence Biden Executive Order Consumer Protection

  • Software provider settles allegations related to data breach

    Privacy, Cyber Risk & Data Security

    On October 5, a software provider serving nonprofit fundraising entities agreed to pay almost $50 million to settle claims with 49 states and the District of Columbia alleging that the provider maintained insufficient data security measures and inadequately responded to a 2020 data breach. Specifically, the settlement resolved claims that the software provider violated state consumer protection laws, breach-notification laws, and the Health Insurance Portability and Accountability Act (HIPAA).

    According to the allegations, the data breach exposed donor information, including Social Security numbers and financial records, of over 13,000 nonprofit groups and organizations, and the provider waited two months before informing these clients of the breach.

    The settlement requires the provider to improve its cybersecurity protections and breach notification procedures.

    Earlier this year, the software provider also settled claims with the SEC for $3 million to address allegations of misleading disclosures relating to the same 2020 data breach.

     

    Privacy, Cyber Risk & Data Security SEC Data Breach HIPAA Consumer Protection Settlement

  • OCC releases bank supervision operating plan for FY 2024

    On September 28, the OCC’s Committee on Bank Supervision released its bank supervision operating plan for fiscal year 2024. The plan outlines the agency’s supervision priorities and highlights several supervisory focus areas including: (i) asset and liability management; (ii) credit; (iii) allowances for credit losses; (iv) cybersecurity; (v) operations; (vi) distributed ledger technology activities; (vii) change management; (viii) payments; (ix) Bank Secrecy Act/AML compliance; (x) consumer compliance; (xi) Community Reinvestment Act; (xii) fair lending; and (xiii) climate-related financial risks.

    Two of the top areas of focus are asset and liability management and credit risk. In its operating plan the OCC says that “Examiners should determine whether banks are managing interest rate and liquidity risks through use of effective asset and liability risk management policies and practices, including stress testing across a sufficient range of scenarios, sensitivity analyses of key model assumptions and liquidity sources, and appropriate contingency planning.” With respect to credit risk, the OCC says that “Examiners should evaluate banks’ stress testing of adverse economic scenarios and potential implications to capital” and “focus on concentrations risk management, including for vulnerable commercial real estate and other higher-risk portfolios, risk rating accuracy, portfolios of highest growth, and new products.”

    The plan will be used by OCC staff to guide the development of supervisory strategies for individual national banks, federal savings associations, federal branches and agencies of foreign banking organizations, and certain identified third-party service providers subject to OCC examination.

    The OCC will provide updates about these priorities in its Semiannual Risk Perspective, as InfoBytes has previously covered here.

    Bank Regulatory Federal Issues OCC Supervision Digital Assets Fintech Privacy, Cyber Risk & Data Security UDAP UDAAP Bank Secrecy Act Anti-Money Laundering Climate-Related Financial Risks Fair Lending Third-Party Risk Management Risk Management

  • CPPA continues efforts towards California Privacy Rights Act

    State Issues

    The California Privacy Protection Agency board is continuing its efforts to prepare regulations implementing the California Privacy Rights Act (covered by InfoBytes here and here).

    Draft risk assessment regulations and cybersecurity audit regulations were released in advance of the September 8 open meeting held by the board. Draft regulations on automated decision-making have yet to be published. Unlike the regulations finalized in March, which were presented in a more complete state, these drafts are expected to draw more extensive comment and feedback. As previously covered by InfoBytes, the California Privacy Protection Agency cannot enforce any regulations until a year after their finalization, which adds time pressure to the finalization process for these draft regulations.

    The draft cybersecurity regulations include thoroughness requirements for the annual cybersecurity audit, which must be completed “using a qualified, objective, independent professional” and “procedures and standards generally accepted in the profession of auditing.” Management must also sign a certification stating that the business has not influenced the audit and that it has reviewed the audit and understands its findings.

    The draft risk assessment regulations require a business to conduct a risk assessment before processing consumers’ personal information in a way that “presents significant risk to consumers’ privacy,” as set forth in an enumerated list that includes selling or sharing personal information; processing the personal information of consumers under age 16; and using certain automated decision-making technology, including AI.

    State Issues Privacy California CCPA CPPA CPRA Compliance State Regulators Opt-Out Consumer Protection

  • NIST updates its Cybersecurity Framework

    Privacy, Cyber Risk & Data Security

    The National Institute of Standards and Technology (NIST) recently unveiled a proposed update to its Cybersecurity Framework, which was originally developed to provide information security guidelines for “critical infrastructure” such as the banking and energy industries. (Covered by InfoBytes here.) The update adds a new, sixth pillar called “govern” that provides categories to facilitate executive oversight; manage enterprise risk, including supply chain risk; and align enterprise resources, strategies, and risk, emphasizing that “cybersecurity is a major source of enterprise risk and a consideration for senior leadership.” This pillar will also guide organizations’ leadership in making internal decisions to support their cybersecurity strategy.

    The draft also updates the framework’s implementation guidance, especially for creating profiles that tailor the guidance to particular situations, and adds implementation examples that are particularly beneficial for smaller firms. The framework’s lead developer, Cherilyn Pascoe, noted that the framework has proven useful across many different sectors, including small businesses and foreign governments, and was therefore updated to be a useful tool for organizations of any type or size, not just those designated as critical infrastructure. A major goal of the updated version is to show organizations how to leverage existing technology frameworks, standards, and guidelines to implement NIST’s framework. The framework’s title also changed from “Framework for Improving Critical Infrastructure Cybersecurity” to “The Cybersecurity Framework” to reflect its expanded inclusivity and wide adoption.

    Public comments must be received by November 4.

    Privacy, Cyber Risk & Data Security Federal Issues NIST Risk Management

  • Governor Hochul unveils statewide cybersecurity strategy for New York

    State Issues

    On August 9, Governor Hochul announced New York’s first-ever statewide cybersecurity strategy to protect the state’s digital infrastructure from cyber threats. The strategy articulates a set of high-level objectives and agency roles and responsibilities, and outlines how existing and planned initiatives will be woven together in a unified approach. Its central principles are unification, resilience, and preparedness, with a focus on state agencies working together with local governments to strengthen the entire state’s defenses. The plan includes a $600 million commitment to improve cybersecurity, including (i) a $90 million investment for cybersecurity in Fiscal Year 2024; (ii) $500 million to enhance healthcare information technology; and (iii) $7.4 million for law enforcement entities to expand their cybercrime capabilities.

    State Issues Privacy, Cyber Risk & Data Security New York Dodd-Frank Federal Reserve Bank Merger Act
