After months of negotiations, the European Parliament and Council have reached a political agreement on the EU's Artificial Intelligence Act (“AI Act”). The AI Act is a landmark in global AI regulation, reflecting the EU's objective to lead the way in promoting a comprehensive legislative approach to support the trustworthy and responsible use of AI systems. The AI Act follows other major EU digital legislation, such as the General Data Protection Regulation (GDPR), the Digital Services Act, the Digital Markets Act, the Data Act, and the Cyber Resilience Act.

The EU AI Act aims to guarantee the safety of AI systems within the EU market and to establish legal certainty for investments and innovation in AI. It seeks to minimise risks for consumers and reduce compliance costs for providers. The legislation adopts a risk-based approach, categorising AI systems into four distinct risk classes that cover different use cases. Certain AI systems are banned outright, subject to limited exceptions, while providers and users of high-risk AI systems face specific obligations encompassing testing, documentation, transparency and notification requirements.

Risk-based Approach to AI Regulation

Fundamentally, the EU AI Act will implement a risk-based strategy, categorising AI systems into four distinct risk levels based on their intended use:

(1) unacceptable risk,

(2) high risk,

(3) limited risk, and

(4) minimal/no risk.

The primary emphasis of the Act is expected to be on AI systems falling into the unacceptable-risk and high-risk categories. Both of these risk classes were addressed extensively in the amendments proposed by the EU Parliament and Council, as well as in the trilogue negotiations.

According to the political agreement, AI systems posing an unacceptable risk, because they violate EU values and represent a clear threat to fundamental rights, will be prohibited in the EU. The EU AI Act is set to ban:

  • biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
  • untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases;
  • emotion recognition in the workplace and educational institutions;
  • social scoring based on social behaviour or personal characteristics;
  • AI systems that manipulate human behaviour to circumvent people's free will;
  • AI used to exploit the vulnerabilities of people (due to their age, disability, or social or economic situation); and
  • certain applications of predictive policing.

While biometric identification systems will be prohibited in principle, an agreement has been reached to allow limited exceptions for their use in publicly accessible spaces for law enforcement purposes. However, such usage is permitted only after obtaining prior judicial authorisation and is restricted to the prosecution of a narrowly defined list of crimes. Additionally, the deployment of "post" remote biometric identification systems (i.e. identification after the event) is reserved exclusively for the "targeted search of a person convicted or suspected of having committed a serious crime." Real-time biometric identification systems must adhere to stringent conditions, and their usage will be "limited in time and location, for the purposes of:

  • targeted searches of victims (abduction, trafficking, sexual exploitation);
  • prevention of a specific and present terrorist threat; or
  • the localisation or identification of a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, environmental crime)".

Second, certain AI systems posing "significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law" will be classified as high-risk, including:

  • "certain critical infrastructures for instance in the fields of water, gas and electricity";
  • "medical devices";
  • "systems to determine access to educational institutions or for recruiting people";
  • "certain systems used in the fields of law enforcement, border control, administration of justice and democratic processes"; and
  • "biometric identification, categorisation and emotion recognition systems".

For AI systems categorised as high-risk, there will be comprehensive mandatory compliance requirements encompassing risk mitigation, data governance, detailed documentation, human oversight, transparency, robustness, accuracy and cybersecurity. High-risk AI systems will undergo conformity assessments to evaluate their adherence to the Act. An emergency procedure is also outlined, permitting law enforcement agencies to deploy, in urgent situations, a high-risk AI tool that has not passed the conformity assessment. Moreover, mandatory fundamental rights impact assessments are stipulated, and citizens will have the right "to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights," as outlined in the political agreement. The agreement also includes provisions for regulatory sandboxes and real-world testing, facilitating the development and training of AI systems before their introduction to the market.

AI systems categorised as limited-risk, which encompass chatbots, certain emotion recognition and biometric categorisation systems, and systems generating deepfakes, will be subject to less extensive transparency obligations. These transparency requirements will involve, among other things, notifying users that they are engaging with an AI system and labelling synthetic audio, video, text and image content as artificially generated or manipulated, both for users and in a machine-readable format.
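
By way of illustration only, the sketch below shows one hypothetical way a provider might attach such a machine-readable label to a piece of generated content, here as a simple JSON sidecar file. The AI Act does not prescribe this format, and the field and function names are assumptions made purely for the example.

```python
import json
from datetime import datetime, timezone

def write_ai_content_label(content_path: str, generator_name: str) -> str:
    """Write a JSON sidecar file marking a piece of content as AI-generated.

    Purely illustrative: the AI Act does not prescribe field names or a
    sidecar-file mechanism; this only shows what "machine-readable" could mean.
    """
    label = {
        "content_file": content_path,
        "artificially_generated": True,       # core machine-readable flag
        "generator": generator_name,          # e.g. the model or system used
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    label_path = content_path + ".ai-label.json"
    with open(label_path, "w", encoding="utf-8") as f:
        json.dump(label, f, indent=2)
    return label_path

# Example usage (hypothetical file and generator names):
# write_ai_content_label("promo_video.mp4", "example-video-model")
```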

Lastly, any AI systems that do not fall into the three primary risk classes, such as AI-enabled recommender systems or spam filters, are categorised as minimal/no-risk. The EU AI Act permits the unrestricted use of minimal-risk AI systems and encourages the voluntary adoption of codes of conduct.

Safeguards for GPAI Models

To address the rapid proliferation of general-purpose AI models (GPAI models), such as those underlying OpenAI's ChatGPT and Google's Bard, the AI Act also tackles the potential systemic risks that may arise from these models. These advanced models and systems will be regulated through a separate tiered approach, with additional obligations for models posing a “systemic risk”. These include obligations to perform routine model evaluations, to conduct adversarial testing of such models to better understand their strengths and weaknesses, and to report serious incidents.

Enforcement and Penalties

It is expected that the EU AI Act will be enforced primarily through national competent market surveillance authorities in each Member State. Additionally, a European AI Office, a new body within the EU Commission, will take up various administrative, standard-setting and enforcement tasks, including with respect to the new rules on GPAI models, to ensure coordination at the European level. The European AI Board, comprising member states' representatives, will be retained as a coordination platform and as an advisory body to the Commission.

Fines for violations of the EU AI Act will depend on the type of AI system, the size of the company and the severity of the infringement, and will range as follows (a worked illustration of the "whichever is higher" rule appears after the list):

  • 7.5 million euros or 1.5% of a company's total worldwide annual turnover (whichever is higher) for the supply of incorrect information; to
  • 15 million euros or 3% of a company's total worldwide annual turnover (whichever is higher) for violations of the EU AI Act's obligations; to
  • 35 million euros or 7% of a company's total worldwide annual turnover (whichever is higher) for violations involving banned AI applications.
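
As a rough illustration of the "whichever is higher" mechanics, the sketch below computes the applicable cap for each tier from a company's worldwide annual turnover. The fixed amounts and percentages are taken from the list above; the function and variable names are our own, and the calculation ignores the more proportionate caps foreseen for smaller companies and start-ups discussed below.

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the maximum fine: the higher of the fixed amount or the turnover percentage."""
    return max(fixed_cap_eur, turnover_eur * turnover_pct)

# Caps per tier under the political agreement (fixed amount in EUR, share of turnover)
TIERS = {
    "supply of incorrect information": (7_500_000, 0.015),
    "other obligations of the Act": (15_000_000, 0.03),
    "banned AI applications": (35_000_000, 0.07),
}

# Example: a company with EUR 2 billion total worldwide annual turnover
turnover = 2_000_000_000
for tier, (fixed_cap, pct) in TIERS.items():
    print(f"{tier}: up to {fine_cap(turnover, fixed_cap, pct):,.0f} EUR")
# For this turnover, the percentage exceeds the fixed floor in every tier,
# e.g. banned AI applications: 7% of 2 billion = 140,000,000 EUR.
```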

Notably, one outcome of the trilogue negotiations is that the EU AI Act will now provide for more proportionate caps on administrative fines for smaller companies and start-ups. Furthermore, the EU AI Act will allow natural or legal persons to report instances of non-compliance to the relevant market surveillance authority.

When will the AI Act take effect?

Entry into force is expected between Q2 and Q3 2024, with the prohibitions applying six months after that date. Some GPAI obligations may come into force after 12 months; however, the details are still to be officially confirmed. All other obligations will apply after 24 months.

What actions should companies take from the outset?

  1. Inventory all AI systems you have (or potentially will have) developed or deployed and determine whether any of these systems falls within the scope of the AI Act.
  2. Assess and categorise the in-scope AI systems to determine their risk classification and identify the applicable compliance requirements (a simplified illustration follows this list).
  3. Understand your organisation's position in relevant AI value chains, the associated compliance obligations and how these obligations will be met. Compliance will need to be embedded in all functions responsible for the AI systems along the value chain throughout their lifecycle.
  4. Consider what other questions, risks (e.g., interaction with other EU or non-EU regulations, including on data privacy), and opportunities (e.g., access to AI Act sandboxes for innovators, small and medium enterprises, and others) the AI Act poses to your organisation's operations and strategy.
  5. Develop and execute a plan to ensure that the appropriate accountability and governance frameworks, risk management and control systems, quality management, monitoring, and documentation are in place when the Act comes into force.
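
Purely as an illustration of steps 1 and 2, the sketch below records a minimal AI-system inventory with a first-pass risk classification. The tier names mirror the Act's four risk categories, but every class, field and function name, as well as the sample entries, are assumptions made for the example rather than anything defined by the Act.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal/no risk"

@dataclass
class AISystemRecord:
    """One entry in an internal AI-system inventory (illustrative only)."""
    name: str
    intended_use: str
    deployed_in_eu: bool
    risk_tier: RiskTier
    applicable_obligations: list[str] = field(default_factory=list)

# Hypothetical inventory entries
inventory = [
    AISystemRecord(
        name="cv-screening-tool",
        intended_use="ranking job applicants",
        deployed_in_eu=True,
        risk_tier=RiskTier.HIGH,
        applicable_obligations=[
            "risk mitigation", "data governance", "documentation",
            "human oversight", "conformity assessment",
        ],
    ),
    AISystemRecord(
        name="support-chatbot",
        intended_use="customer support conversations",
        deployed_in_eu=True,
        risk_tier=RiskTier.LIMITED,
        applicable_obligations=["inform users they are interacting with AI"],
    ),
]

# First-pass view of which systems need the heaviest compliance work
for record in inventory:
    if record.deployed_in_eu and record.risk_tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH):
        print(record.name, "->", record.risk_tier.value, record.applicable_obligations)
```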

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.