EEOC, FTC, And Other Federal Agencies Release Joint Statement On Confronting Bias And Discrimination In AI And Automated Systems

On April 25, 2023, officials from four federal agencies released a joint statement pledging to increase "enforcement efforts to protect the public from bias in automated systems and artificial intelligence" ("AI"). The agencies taking part in this effort are the Equal Employment Opportunity Commission ("EEOC"), the Federal Trade Commission ("FTC"), the U.S. Department of Justice ("DOJ"), and the Consumer Financial Protection Bureau ("CFPB"). As we have reported here and here, these agencies have previously expressed concern that the use of automated systems in employment decision-making can produce discriminatory outcomes. The joint statement outlines the agencies' recent accomplishments in this area and reiterates that while automated systems offer increased efficiency, greater productivity, and other benefits, they also have "the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes."

The joint statement emphasizes that existing legal authorities apply equally to the use of automated systems and other new technologies. It then highlights the agencies' individual and collaborative efforts to draw attention to potentially harmful uses of automated systems. For example, it cites the EEOC's technical assistance document, published in May 2022, which explains that the Americans with Disabilities Act ("ADA") applies to the use of software, algorithms, and AI in employment decision-making, as well as the FTC's June 2022 report to Congress warning that AI tools can be inaccurate, biased, and discriminatory and may promote invasive commercial surveillance.

Finally, the statement identifies three key ways in which automated systems can lead to illegal discrimination or violate other laws. First, it notes that the output of automated systems may be skewed by datasets that "incorporate historical bias," "[are] unrepresentative or imbalanced," or "contain other types of errors," and it cautions that unlawful discrimination may result when automated systems correlate data with protected classes. Second, it describes certain automated systems as "black boxes" and warns that this lack of transparency can make it difficult to know whether a system is fair. Third, it recognizes that developers of automated systems often do not fully understand the contexts in which their systems will be used, so the systems may be built on flawed assumptions about users and their practices.
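
To make the proxy concern concrete: even when a protected characteristic is excluded from a model's inputs, a facially neutral feature may correlate with it closely enough to reproduce a protected-class disparity. The following is a minimal sketch using invented data and hypothetical column names; nothing in it comes from the joint statement itself.

```python
# A minimal, hypothetical illustration of the "proxy" problem described in the
# joint statement. All data and column names below are invented for this sketch.
import pandas as pd

# Hypothetical applicant records: the scoring model never sees
# "protected_class", but the facially neutral "commute_zone" feature
# happens to track it closely.
df = pd.DataFrame({
    "protected_class": [1, 1, 1, 0, 0, 0],
    "commute_zone":    [1, 1, 0, 0, 0, 0],  # neutral-looking model input
    "model_score":     [0.2, 0.3, 0.6, 0.8, 0.7, 0.9],
})

# Strong correlation between the neutral feature and the protected class means
# decisions keyed to commute_zone can still produce a protected-class disparity
# even though the class itself was "removed" from the inputs.
print(df["commute_zone"].corr(df["protected_class"]))       # ~0.71 here
print(df.groupby("protected_class")["model_score"].mean())  # score gap by class
```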

The joint statement concludes with a pledge by the agencies to protect individuals' rights vigorously, whether violations happen through the use of advanced technology or more traditional means.

Takeaway for Employers

While the joint statement does not represent any change in law or regulation, it provides a glimpse into the enforcement priorities of the EEOC, FTC, DOJ, and CFPB. Employers who use automated systems to make employment decisions should regularly test and validate those systems for potentially discriminatory outcomes and consult with counsel about their legal obligations, particularly as more laws regulating AI are proposed and enacted (see, e.g., here and here). We will continue to monitor for further developments.
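
One widely used validation screen is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures, under which a selection rate for any group that is less than 80% of the rate for the most-selected group is generally regarded as evidence of adverse impact. Below is a minimal sketch of such a check, assuming a pandas-based audit workflow and hypothetical column names.

```python
# Minimal sketch of an adverse-impact screen based on the "four-fifths rule"
# from the EEOC's Uniform Guidelines on Employee Selection Procedures.
# The pandas workflow and column names are assumptions for illustration.
import pandas as pd

def four_fifths_check(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.DataFrame:
    """Flag groups whose selection rate is below 80% of the highest group's rate."""
    rates = df.groupby(group_col)[selected_col].mean()  # selection rate per group
    impact_ratio = rates / rates.max()                  # ratio vs. best-off group
    return pd.DataFrame({
        "selection_rate": rates,
        "impact_ratio": impact_ratio,
        "below_four_fifths": impact_ratio < 0.8,        # potential adverse impact
    })

# Hypothetical audit data: one row per applicant scored by an automated tool,
# with 1 = advanced by the tool, 0 = rejected.
applicants = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1, 1, 1, 0, 1, 0, 0, 0],
})
print(four_fifths_check(applicants, "group", "selected"))
```

The four-fifths rule is a screening heuristic rather than a legal safe harbor, so results from a check like this are a starting point for review with counsel, not a conclusion.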
