A recent blog post from the FTC provides further indication that the agency will be increasing its focus on racial bias in artificial intelligence algorithms. On April 20, 2021, the FTC’s Business Blog warned that companies must hold themselves accountable for the performance of their artificial intelligence algorithms or “the FTC [will] do it for you.” Citing a recent study published in the Journal of the American Medical Informatics Association finding that predictive models used to allocate ICU beds may reflect racial bias, the FTC signaled that it is likely to use its enforcement powers to crack down on companies that use AI irresponsibly. In particular, the FTC blog post emphasized its power to enforce:
- Section 5 of the FTC Act—The agency may take enforcement action against companies that engage in unfair or deceptive acts or practices;
- The Fair Credit Reporting Act—The FTC signaled that it may use its enforcement powers against companies whose algorithms are used to deny individuals employment, housing, credit, insurance, or other benefits; and
- The Equal Credit Opportunity Act—According to the FTC blog post, the ECOA can be used against companies that “use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.”
While the blog post does not provide specific guidelines for the use of artificial intelligence algorithms, the FTC identified several protections companies should consider when deploying them:
- Ensure that the data sets used to build algorithms are not missing information from particular populations in ways that exacerbate inequalities for legally protected groups;
- Test algorithms periodically to confirm that they do not discriminate against protected classes;
- Consider ways to increase transparency and obtain independent review of algorithms in order to identify and correct problems of bias;
- Ensure that representations about the use of artificial intelligence are truthful, not misleading, and backed by solid evidence;
- Be transparent and accurate in informing consumers about how their data is used;
- Ensure that algorithms do not “cause more harm than good”; and
- Hold the company accountable for its algorithms’ performance, including through greater transparency and independent review protocols.
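The FTC does not prescribe how the periodic testing it recommends should be performed. As one illustration only, the sketch below compares an algorithm's approval rates across two groups and flags a gap using the EEOC's "four-fifths" rule of thumb; the group labels, sample data, and 80% threshold are illustrative assumptions, not FTC requirements.

```python
# Hypothetical sketch of periodic bias testing: compare a model's
# approval rates across groups. Data and thresholds are illustrative.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group rate divided by highest. A ratio below 0.8 is a
    common red flag under the EEOC's four-fifths rule of thumb."""
    return min(rates.values()) / max(rates.values())

# Illustrative audit sample: (group, model approved?)
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)

rates = approval_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates, round(ratio, 2))  # {'A': 0.8, 'B': 0.5} 0.62 -> flag for review
```

A check like this only surfaces one kind of outcome disparity; a real compliance program would pair it with the independent review and documentation practices the FTC describes.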
The FTC posting comes on the heels of comments by Acting FTC Chair Rebecca Kelly Slaughter in multiple public forums indicating that a focus of her tenure will be policing companies’ use of artificial intelligence algorithms that result in discrimination. Companies deploying artificial intelligence should take note of the FTC’s increased focus on algorithmic bias and be prepared to demonstrate that their technologies are free from such biases.