House task force holds hearing on AI bias
On February 12, the House Financial Services Committee’s Task Force on Artificial Intelligence (AI) held a hearing entitled “Equitable Algorithms: Examining Ways to Reduce AI Bias in Financial Services.” As previously covered by InfoBytes, the Committee created the task force to examine how AI is used in the financial services industry, along with issues surrounding algorithms, digital identities, and combating fraud. According to the Committee’s memorandum on the hearing, AI’s key technology is machine learning (ML)—“a process that may rely on pre-set rules to solve problems (also known as algorithms) without,” or with only limited, human involvement.

Witnesses, drawn largely from the fields of computer science and AI, discussed how human biases can be perpetuated when algorithms are trained on historical data, and how best to ensure fairness and accuracy. The witnesses agreed that fairness has many different definitions that must be weighed when designing algorithms, and testified that striving for fairness toward one protected class may necessarily involve tradeoffs that reduce fairness toward another. Among other things, committee members questioned whether it is possible to formulate an algorithm that guarantees fairness; witnesses urged them not to focus solely on algorithms but also to consider the data (where it came from, its quality, and its appropriateness), since flawed data is likely to produce flawed outputs.
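The tradeoff the witnesses described can be made concrete. The sketch below, using invented data and hypothetical helper names, measures two common fairness definitions on the same set of model decisions: demographic parity (do both groups receive approvals at the same rate?) and equal opportunity (are truly qualified applicants in both groups approved at the same rate?). The example shows a decision rule can satisfy one definition while violating the other, which is why a single notion of "fairness" cannot simply be guaranteed.

```python
# Illustrative only: all records below are invented.
# Each record is (group, true_label, model_decision), where
# true_label = 1 means the applicant was actually qualified and
# model_decision = 1 means the model approved the application.
decisions = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 1), ("A", 0, 0),
    ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 0, 1),
]

def positive_rate(records, group):
    """Approval rate for a group -- the quantity demographic parity compares."""
    approvals = [d for (g, _, d) in records if g == group]
    return sum(approvals) / len(approvals)

def true_positive_rate(records, group):
    """Approval rate among truly qualified members -- equal opportunity."""
    qualified = [d for (g, y, d) in records if g == group and y == 1]
    return sum(qualified) / len(qualified)

for grp in ("A", "B"):
    print(grp,
          "positive rate:", positive_rate(decisions, grp),
          "TPR:", true_positive_rate(decisions, grp))
```

On this toy data the model satisfies equal opportunity (every qualified applicant in both groups is approved, so both TPRs are 1.0) yet fails demographic parity (group A is approved 75% of the time versus 50% for group B). Adjusting the decision rule to equalize approval rates would, in general, change the per-group TPRs, illustrating the tradeoff the witnesses testified about.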