On May 18, 2023, the U.S. Equal Employment Opportunity Commission (EEOC) issued new technical guidance on how to measure adverse impact when employment selection tools use artificial intelligence (AI), titled "Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964." Companies using, or considering using, AI or other algorithmic decision-making in the employment context should look to this nonbinding technical assistance when validating employment tests and other selection devices under the Uniform Guidelines on Employee Selection Procedures (the "Guidelines"), the uniform standards that govern adverse impact and job relatedness under Title VII of the Civil Rights Act of 1964 (Title VII).

Key Takeaways

  • The Guidelines apply when AI is used to make or inform employment decisions about whether to hire, retain, promote "or take similar actions."
  • A selection process that uses AI could be found to have a disparate impact in violation of Title VII if the selection rate of individuals of a particular race, color, religion, sex or national origin, or a "particular combination of such characteristics" (e.g., a combination of race and sex, such as for applicants who are Asian women), is less than 80% of the rate of the non-protected group.
  • Employers are responsible for any adverse impact caused by AI tools that are purchased from or administered by third-party AI vendors and cannot rely on a vendor's own predictions or studies of whether its AI tools will cause adverse impact on protected groups.
  • Employers are advised to assess early and often the impact of selection tools that use AI to make or inform employment decisions and, where there is adverse impact, ensure such tools have been properly validated under the Guidelines to establish that such tools are job related and consistent with business necessity.

The EEOC has issued new guidance on how to assess adverse impact when employers use AI to make or inform employment decisions regarding hiring, promotion, termination or similar actions. The new guidance confirms that AI tools are subject to the same standards that apply to other selection devices: those set forth in the EEOC's Guidelines.

Types of AI Used in Employment Selection

The guidance explains that "algorithmic decision-making tools" is a broad term covering different types of systems that may incorporate AI, as well as other automated systems and software.1 Examples of algorithmic decision-making tools that employers might rely on during selection procedures include:

  • Resume scanners that prioritize applications using certain keywords.
  • Video interviewing software that evaluates candidates based on their facial expressions and speech patterns.
  • "Virtual assistants" or "chatbots" that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements.
  • Testing software that provides "job fit" scores for applicants or employees regarding their personalities, aptitudes, cognitive skills or perceived "cultural fit" based on their performance on a game or on a more traditional test.
  • Employee monitoring software that rates employees on the basis of their keystrokes or other factors.2

See also LaborSpeak: AI Regulations in Employment Decisions for more discussion on AI regulation in employment decisions.

These types of AI are not inherently unlawful, and they are by no means the only tools that may be used in employment selection procedures. Where tools like these are used to make or inform selection decisions, employers should follow the Guidelines to avoid exposure to adverse impact discrimination claims under Title VII.

When Does an AI Tool Violate Title VII?

Under Title VII, facially neutral tests or selection procedures may be unlawful if they disproportionately exclude applicants or employees based on race, color, religion, sex or national origin and either (i) are not job related and consistent with business necessity, or (ii) an alternative selection procedure exists that would meet the employer's needs without causing the disproportionate exclusion.3 This disproportionate exclusion based on a protected characteristic is referred to as a "disparate" or "adverse" impact.

According to the new guidance, when an employer is using an algorithmic decision-making tool as a selection procedure, that tool may violate Title VII prohibitions against disparate impact discrimination if:

  1. The tool has an adverse impact on individuals of a particular race, color, religion, sex or national origin.
  2. The employer cannot establish that the use of the tool is job related and consistent with business necessity.
  3. A less discriminatory tool was available but not used.

Notably, the EEOC specifies that adverse impact may be found when an employment selection tool disproportionately screens out "individuals with a combination of [protected] characteristics" when compared with "the non-protected group." The EEOC does not explain, however, who should be included in the "non-protected group" when analyzing adverse impact against individuals with more than one protected characteristic. The EEOC also reminds employers that they may still be liable for Title VII violations when they use algorithmic decision-making tools designed or administered by a third-party vendor.4 This includes circumstances in which an employer relies on adverse impact studies that the vendor conducted using its database of data from other employers. What matters under Title VII is whether an AI tool is causing adverse impact on the employer's own applicants and employees, irrespective of whether the vendor has evidence that its tool caused no adverse impact in other employment settings.

Measuring AI Outcomes for Adverse Impact

The guidance explains how employers can monitor their algorithmic decision-making tools for adverse impact, and the method is nothing new. The guidance simply reiterates the approach set forth in the Guidelines, known as the "four-fifths rule," which compares the selection rate of the protected group to the selection rate of the non-protected group to determine whether the ratio between the two is less than 80%.5 However, the EEOC cautions that the 80% rule is "merely a rule of thumb" and not definitive proof that a selection process is lawful under Title VII; instead, employers may want to rely on other tests of statistical significance depending on their circumstances. The same caveat appears in the Guidelines.
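To illustrate the arithmetic behind the four-fifths rule, the short Python sketch below computes each group's selection rate and the resulting impact ratio. The applicant and selection counts are purely hypothetical and are not drawn from the guidance; the sketch is illustrative only and is not a substitute for a proper adverse impact analysis.

```python
# Hypothetical illustration of the four-fifths (80%) rule of thumb.
# All counts below are invented for this example.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a group who were selected."""
    return selected / applicants

# Hypothetical outcomes from an AI-scored screening step.
men_rate = selection_rate(selected=48, applicants=80)     # 60%
women_rate = selection_rate(selected=12, applicants=40)   # 30%

# Impact ratio: the lower group's selection rate divided by the
# higher group's selection rate.
impact_ratio = min(men_rate, women_rate) / max(men_rate, women_rate)

print(f"Selection rate (men):   {men_rate:.0%}")
print(f"Selection rate (women): {women_rate:.0%}")
print(f"Impact ratio:           {impact_ratio:.0%}")  # 50%

# Under the four-fifths rule of thumb, a ratio below 80% suggests
# possible adverse impact warranting closer review (for example,
# tests of statistical significance), as discussed above.
if impact_ratio < 0.80:
    print("Below the 80% threshold: potential adverse impact; review further.")
```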

In the event an employer's audit reveals that its algorithmic decision-making tool has an adverse impact, the guidance states that the employer can take steps to reduce the impact or select a different tool in order to avoid violating Title VII.

The ultimate takeaway from the EEOC's new guidance, that AI tools used for employment selection should be assessed under the Guidelines' adverse impact standards, is nothing new. It comes, however, on the heels of the EEOC launching its AI initiative in 2021 and follows a broader pattern of increased scrutiny by federal enforcement agencies of potential bias caused by algorithmic decision-making. Last year saw the release of joint guidance documents from the EEOC and the Department of Justice (DOJ) on disability bias and AI, followed by the White House's Blueprint for an AI Bill of Rights. Guidance documents like these both support the agencies' strategic focus on AI and provide useful information for avoiding agency enforcement actions.

Footnotes

1. Equal Employment Opportunity Comm'n, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964 (May 18, 2023), available at https://www.eeoc.gov/select-issues-assessing-adverse-impact-software-algorithms-and-artificial-intelligence-used.

2. Id. at 1.

3. Id.

4. Id.

5. Id.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.