As has been widely reported, the White House issued a comprehensive and sweeping (some might say overly broad) Executive Order on October 30 on "the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." Interest in Artificial Intelligence (AI) has skyrocketed, as have concerns about how it is developed and implemented in the workplace and in society at large, including reports of fake images, impacts on elections, predictive outputs containing stereotypes and biases based on race, religion, and gender, and other risks. Recognizing this, the Order notes that AI "holds extraordinary potential for both promise and peril," but that "irresponsible use could exacerbate societal harms, such as fraud, discrimination, bias, and disinformation" and could cause or contribute to other significant harms.

An Executive Order is a written directive from the President of the United States to the Executive Branch of the federal government. As such, it has no direct impact on private sector businesses, but it does reveal where the White House is staking out its positions on many aspects of the burgeoning AI wave. The Order sets goals and directs certain implementation steps, such as requiring federal agencies to develop guidelines and best practices in key areas (e.g., privacy, national security, consumer protection, cybersecurity, and intellectual property) and promoting responsible innovation in the development and training of AI tools. It also refers several times to concerns about the impact of AI on the workplace, so it will be crucial for employers to track legislative and regulatory debates and developments concerning AI.

Here are a few takeaways:

  • The Order specifically refers to supporting workers, including that workers "need to sit at the table, including through collective bargaining" to secure a role in "new jobs and industries created by AI development." It cautions against AI being used to undermine worker rights, to "encourage undue worker surveillance... or cause harmful labor-force disruptions," or to be misused to cause or contribute to discrimination and bias, among other harms.
  • Sections 7.1 and 7.3 of the Order specifically address concerns about discrimination in the workplace. For example, Section 7.1 directs the Assistant Attorney General for the Department of Justice's Civil Rights Division to coordinate with other Federal civil rights offices on how to "prevent and address discrimination in the use of automated systems, including algorithmic discrimination." Section 7.3 directs the Secretary of Labor to publish guidance for federal contractors "regarding non-discrimination in hiring involving AI and other technology-based hiring systems." Employers are increasingly implementing AI tools and strategies to make recruiting and hiring more efficient, such as by drafting job descriptions, screening applicants, and identifying key job functions. As they do, they will likely be governed by a new regulatory scheme that applies heightened scrutiny both to their use of such tools and to the due diligence they perform to validate those tools and ensure legitimate, unbiased predictive outcomes.

We will monitor implementation of the directives in the Executive Order as well as legislative and regulatory activity that may impact employers' consideration and use of rapidly evolving AI technologies.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.