Seyfarth Synopsis: The draft text of the EU AI Act has been unofficially posted online, showing significant momentum towards the Act's final passage. Under the posted draft of the Act, the provisions regarding "high-risk" AI systems will not go into effect until 36 months after the Act's "entry into force." Nevertheless, the Act's "high-risk" categorization of AI systems used in employment, including worker monitoring, has broad and immediate implications for employers using (or looking to use) AI in their employment processes. Employers everywhere should consider how their current AI practices align with the Act's broad requirements, because even outside of the EU, these standards and requirements are likely to be closely examined by legislators, regulators, and other stakeholders seeking to manage AI risks.

On Monday, January 22, 2024, two documents revealing details about the EU Artificial Intelligence Act were unofficially posted online, showing continued momentum towards the Act's final adoption, perhaps as soon as next month. Given this momentum, employers everywhere, and not just those in the EU, should pay close attention to the breadth of the EU AI Act's requirements.

In December 2023, the "trilogue" negotiations for the EU AI Act successfully concluded, with a provisional agreement reached between representatives of the European Commission, the Council of the European Union, and the European Parliament. While many high-level details about the negotiations were disclosed following the announcement of the provisional agreement, consistent with the trilogue process, work continued at a "technical level to finalise the details of the new regulation."

The exact text of the Act was not publicly disclosed until this week, when, on January 22, 2024, Luca Bertuzzi, a journalist, posted online a version of the "four-column" document labeled "Final draft as updated on 21/01," purportedly showing the three negotiating positions taken during the trilogue negotiations, with a fourth column showing the provisional agreement. The same day, Dr. Laura Caroli, a Senior Policy Advisor to the European Parliament, posted a copy of the EU AI Act's "pre-final text" which is expected to be voted on in February. Dr. Caroli clarified on social media, "The possible changes at this point will only be purely linguistic and technical. The rest is what you see."

The significant momentum behind the EU AI Act means that the details of the mandated AI risk-management practices will soon have global implications. We have long known that there was broad consensus among the negotiators to accept the proposal to classify AI systems used in employment as "high risk." The posted legislative text confirms the broad scope of agreement on this "high-risk" classification, with Paragraph 4 of Annex III specifying that the following systems are considered "high-risk" systems:

4. Employment, workers management and access to self-employment:

(a) AI systems intended to be used for recruitment or selection of natural persons, notably to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates;

(b) AI intended to be used to make decisions affecting terms of the work-related relationships, promotion and termination of work-related contractual relationships, to allocate tasks based on individual behavior or personal traits or characteristics and to monitor and evaluate performance and behavior of persons in such relationships.

Recital 36 specifies some of the reasoning behind this classification. It states:

AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions affecting terms of the work-related relationship, promotion and termination of work-related contractual relationships, for allocating tasks based on individual behaviour, personal traits or characteristics and for monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects, livelihoods of these persons and workers' rights.

Recital 36 also clarifies the intent to classify AI systems used for worker monitoring or performance evaluation as "high risk":

Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also undermine their fundamental rights to data protection and privacy.

The EU AI Act's requirements for "high-risk" AI systems are broadly consistent with what observers have been expecting. Under the EU AI Act, deployers of "high-risk" AI systems must conduct a pre-deployment "fundamental rights impact assessment" which, among other things, must assess and document the specific risks of harm to people or groups of people. Other obligations for "high-risk" AI systems under the EU AI Act include, but are not limited to:

  • establishing an AI risk management system, understood to be a "continuous iterative process" (Article 9(2));
  • pre-deployment testing of AI systems to identify the "most appropriate and targeted risk management measures" (Article 9(5) and (7));
  • subjecting training, validation, and testing data sets to "appropriate data governance and management practices" that include "examination in view of possible biases that are likely to affect the health and safety of persons, negatively impact fundamental rights or lead to discrimination prohibited under Union law" (Article 10(2)(f));
  • making disclosures (Article 13) regarding "the level of accuracy, including its metrics, robustness and cybersecurity... against which the high-risk AI system has been tested and validated" (Article 13(3)(b)(ii)); and
  • establishing human oversight (Article 14), including a requirement that high-risk AI systems be designed to be "effectively overseen by natural persons."

Moreover, Article 29 specifically requires deployers of high-risk AI systems to "assign human oversight" of the systems to people "who have the necessary competence, training and authority, as well as the necessary support."

While these AI risk-management concepts are not new, having them eventually apply with the force of law provides a compelling rationale for employers to assess how their current practices may align with the new requirements. According to the posted documents, the Act's provisions regarding "high-risk" AI systems will not go into effect until 36 months after the Act's entry into force. (The Act's "entry into force" would occur after its official adoption and subsequent publication in the Official Journal of the European Union.) Even though 36 months may seem like a long time frame, the Act's approach of regulating AI used in employment processes as "high-risk" sets a precedent that is likely to influence future regulatory efforts around the world.

Additionally, the Act's provisions on expressly prohibited AI applications are set to go into effect a mere six months after the Act's entry into force. Among the prohibited applications is "emotion recognition," and Article 5 of the posted draft of the Act clarifies that this prohibition covers:

The placement on the market, putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person within the areas of workplace and educational institutions, except in cases where the use of the AI system is intended for medical or safety reasons.

Recital 26(c) explains the reasoning behind this prohibition, noting specifically:

AI systems identifying or inferring emotions or intentions of natural persons on the basis of their biometric data may lead to discriminatory outcomes and can be intrusive to the rights and freedoms of the concerned persons.

While the posted text of Recital 26(c) identifies inferring emotions from "biometric data" as an area of concern, the posted text of Article 5 is not limited to emotions inferred from biometric data. Given the shorter compliance window for prohibited applications, employers using AI technology that could arguably meet these criteria should continue to pay close attention to this particular requirement as it progresses towards final approval.

In light of the extremely active and ever-evolving AI regulatory environment, employers using any form of AI in their employment processes should factor these developments into their assessment of their AI risk-management and safety processes.
