Gensler highlights challenges of AI-based models
On July 17, SEC Chair Gary Gensler spoke before the National Press Club about opportunities and challenges stemming from the use of artificial intelligence (AI)-based models. While Gensler acknowledged that AI has the potential to promote greater financial inclusion and enhance user experience, he warned that AI advancements also present challenges that must be considered at both the individual and broader economic levels.

At the individual (micro) level, Gensler explained that AI’s predictive capabilities allow for personalized communication, product offerings, and pricing. This individualized approach (also known as “narrowcasting”), however, raises questions about how individuals will respond to tailored messages and offers, he said. When AI models are used to make important decisions such as job selection, loan approvals, credit decisions, and healthcare allocation, issues related to explainability, bias, and robustness become a concern. Gensler elaborated that AI models often produce unexplainable decisions and outcomes due to their nonlinear and hyper-dimensional nature. AI may also make it more difficult to ensure fairness, he said, and can inadvertently perpetuate biases present in historical data or rely on latent features that act as proxies for protected characteristics, adding that “the challenges of explainability may mask underlying systemic racism and bias in AI predictive models.”
Gensler explained that these data analytics challenges are not new: in the late 1960s and early 1970s, similar issues helped drive, in part, the Fair Housing Act, the Fair Credit Reporting Act (FCRA), and the Equal Credit Opportunity Act (ECOA). He warned that as advisers and brokers incorporate these technologies into their services, they must ensure that any advice and recommendations they offer (whether or not based on AI) serve the best interests of their clients and retail customers, and that they do not place their own interests ahead of investors’ interests.