This article was originally published in Bloomberg Law. Reprinted with permission. Any opinions in this article are not those of Winston & Strawn or its clients. The opinions in this article are the authors' opinions only.

On March 13, 2024, three years after the proposal by the European Union (EU) Commission (Commission), lawmakers in the European Parliament approved the Artificial Intelligence Act (AI Act), the first comprehensive regulation on AI in the world, by an overwhelming majority of 523 votes in favor, 46 against, and 49 abstentions. This approval moved the AI Act toward its final endorsement and entry into force.

The AI Act aims to strike a balance between effective regulation and continued innovation. As such, it focuses on ensuring that AI systems and models marketed within the EU are used in a way that is ethical and safe and respects EU fundamental rights, while at the same time strengthening uptake, investment, and innovation.

The purpose of this FAQ document is to provide an overview of the enforcement of the AI Act and to enable companies to ask the right questions as they prepare for its arrival.

1. WHAT TYPES OF ENTITIES DOES THE AI ACT APPLY TO & HOW?

The AI Act, Article 3, defines "artificial intelligence system" as "a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

All entities involved in providing, manufacturing, supplying, distributing or deploying AI systems and models—whether they are companies, foundations, associations, research laboratories, or any other legal entities—operating within or outside the EU, are required to adhere to the regulations outlined in the AI Act if the AI system is placed on the EU market or its use affects people located in the EU.

However, the AI Act does not apply to military AI systems or to AI systems used for the sole purpose of scientific research and development. Furthermore, it does not apply to open-source AI unless it is banned or classified as high risk.

The new rules of the AI Act will apply in the same way across all EU member states through a risk-based framework that imposes varying degrees of regulation across four levels of risk:

  • Unacceptable-risk AI systems are AI systems considered a clear threat to the fundamental rights of people. This includes AI systems or applications that manipulate human behavior to circumvent users' free will. Examples include emotion recognition in the workplace and in schools, and predictive policing. Such AI systems will be banned.

  • High-risk AI systems, broadly defined, include certain critical infrastructures; medical devices; systems used to determine access to educational institutions or to recruit people; certain systems used in the fields of law enforcement, border control, administration of justice and democratic processes; and biometric identification. Such AI systems will be subject to strict requirements, including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity.

  • Minimal-risk AI systems include AI-enabled recommender systems and spam filters. These AI systems will not be subject to any additional legal obligations, as they present only minimal or no risk to citizens' rights or safety.

  • Specific-transparency-risk AI systems include systems such as chatbots and deepfake generators. AI-generated content will have to be labeled as such, and users will need to be informed when biometric categorization or emotion recognition systems are being used.

It should be noted that most of the AI systems known today fall into the last two categories (i.e., minimal and specific-transparency risk).
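For companies beginning to map an AI portfolio against this framework, a rough first-pass triage can be expressed in code. The following is a minimal, illustrative sketch only; the tier names, example use cases, and conservative default are our own simplification, not an official taxonomy from the Act:

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified view of the AI Act's four risk tiers."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict requirements (risk mitigation, logging, oversight, ...)"
    SPECIFIC_TRANSPARENCY = "labeling / disclosure obligations"
    MINIMAL = "no additional legal obligations"

# Illustrative (non-exhaustive) mapping of use cases to tiers,
# based on the examples cited in the framework above.
EXAMPLE_USE_CASES = {
    "emotion recognition in the workplace": RiskTier.UNACCEPTABLE,
    "predictive policing": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "medical device software": RiskTier.HIGH,
    "customer service chatbot": RiskTier.SPECIFIC_TRANSPARENCY,
    "deepfake generator": RiskTier.SPECIFIC_TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
    "recommender system": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Look up a known use case; default to HIGH pending legal review."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for case, tier in EXAMPLE_USE_CASES.items():
        print(f"{case!r}: {tier.name} ({tier.value})")
```

The conservative default (unknown use cases fall to HIGH until reviewed) reflects the practical reality that misclassifying a high-risk system carries far greater exposure than over-scoping an internal review.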

2. CAN US COMPANIES BE SUBJECT TO THE AI ACT?

Yes, like the General Data Protection Regulation (GDPR), the AI Act applies not only to European organizations but also to organizations outside Europe. U.S. companies, along with any other non-EU companies, can be subject to the AI Act if they provide AI in products and/or services directed at EU consumers. Providers and users situated outside the EU (whether involved in development, introduction, sale, distribution, or utilization) must abide by the AI Act if the outputs of their AI systems are used within the EU. There is no turnover or user threshold for applicability. This is very similar to the global reach of the GDPR.
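As a first-pass screen, the applicability test described above can be sketched as a simple decision rule. This is a deliberate oversimplification of the Act's scope rules and not legal advice; all field names are our own:

```python
from dataclasses import dataclass

@dataclass
class AIOffering:
    """Simplified fact pattern for an AI Act applicability check."""
    placed_on_eu_market: bool      # system/model marketed in the EU
    output_used_in_eu: bool        # outputs affect people located in the EU
    military_use_only: bool = False
    research_use_only: bool = False

def ai_act_may_apply(o: AIOffering) -> bool:
    """Rough screen reflecting the reach described above; not legal advice."""
    if o.military_use_only or o.research_use_only:
        return False  # carve-outs noted in Question 1
    # No turnover or user-count threshold: market placement or
    # EU-located effects alone can trigger applicability.
    return o.placed_on_eu_market or o.output_used_in_eu

# A U.S. provider whose system's outputs are used in the EU:
print(ai_act_may_apply(AIOffering(placed_on_eu_market=False,
                                  output_used_in_eu=True)))  # True
```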

3. HOW IS THE AI ACT ENFORCED?

The enforcement of the AI Act will likely be complex, as it will involve several authorities at two levels: the EU and the member states.

First, at the EU level, the European AI Office (AI Office), established in February 2024 within the Commission, will oversee the AI Act's enforcement and implementation across the member states and will ensure coordination at the EU level. Along with the national market-surveillance authorities, the AI Office will be the first body globally to enforce binding rules on AI and is therefore expected to become an international reference point. It should be noted that the AI Office will also supervise the implementation and enforcement of the new rules on general-purpose AI (GPAI) models. The AI Office will also cooperate with the newly established EU AI Board (AI Board), which will mainly advise on implementation of the AI Act, coordinate between national regulators, and issue recommendations and opinions, a function similar to that of the European Data Protection Board (EDPB) under the GDPR. Like the EDPB, the AI Board will be composed of representatives of the member states. Finally, the Commission will establish a scientific panel of independent experts to support enforcement activities under the AI Act, in particular by alerting the AI Office to possible systemic risks of GPAI models.

Second, at the member-state level, the Commission has highlighted the key role of national authorities in monitoring the application of the AI Act and handling market-surveillance activities. Thus, each member state will have the obligation to establish or designate at least one notifying authority and one market surveillance authority to ensure the application and implementation of the AI Act.

4. WHEN DOES THE EU AI ACT GO INTO EFFECT?

The AI Act will enter into force on the 20th day after its publication in the Official Journal of the European Union.

The AI Act will then become fully applicable two years after its entry into force; however, some provisions apply earlier and some later: prohibitions will apply after six months, EU codes of practice should be ready nine months after entry into force, the rules on GPAI will apply after 12 months, and obligations for high-risk AI systems will apply after three years.
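To make the staggered timeline concrete, the sketch below computes the key application dates from a hypothetical Official Journal publication date. The date used is purely illustrative; the real milestones run from the actual publication date:

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months (day clamped)."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30,
                     31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

# Hypothetical publication date in the EU Official Journal.
publication = date(2024, 7, 12)
entry_into_force = publication + timedelta(days=20)

milestones = {
    "Entry into force (publication + 20 days)": entry_into_force,
    "Prohibitions apply (+6 months)": add_months(entry_into_force, 6),
    "Codes of practice ready (+9 months)": add_months(entry_into_force, 9),
    "GPAI rules apply (+12 months)": add_months(entry_into_force, 12),
    "Act fully applicable (+24 months)": add_months(entry_into_force, 24),
    "High-risk obligations apply (+36 months)": add_months(entry_into_force, 36),
}

for label, when in milestones.items():
    print(f"{label}: {when:%B %d, %Y}")
```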

To bridge the transitional period before the AI Act becomes generally applicable, the Commission will be launching an AI Pact. It will convene AI developers from Europe and around the world who commit on a voluntary basis to implement key obligations of the AI Act ahead of the legal deadlines.

One of the primary challenges will be to see whether the AI Act will stand the test of time, considering the rapid evolution of AI technology.

5. WHAT ARE THE PENALTIES UNDER THE EU AI ACT?

The AI Act provides for significant fines for violations of its terms, each calculated as the higher of a fixed amount and a percentage of turnover (the mechanics are illustrated in the sketch following this list).

  • Companies that fail to comply with the prohibition of certain AI practices will be fined up to €35 million or 7% of global annual turnover (whichever is higher).

  • For violations of other obligations under the AI Act, including the GPAI obligations and noncompliance with enforcement measures, companies may be subject to fines of up to €15 million or 3% of global annual turnover (whichever is higher).

  • Providing incorrect, incomplete, or misleading information to notified bodies or national competent authorities in reply to a request will be subject to a fine of up to €7.5 million or 1% of global annual turnover (whichever is higher).

  • More proportionate caps on administrative fines are foreseen for small and medium enterprises and startups in case of infringements of the AI Act.
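Because each cap is "whichever is higher" of a fixed amount and a percentage of global annual turnover, the effective ceiling scales with company size. A minimal worked sketch follows; the turnover figure is hypothetical:

```python
# Fine ceilings under the AI Act: the higher of a fixed amount (EUR)
# and a percentage of global annual turnover.
TIERS = {
    "prohibited practices": (35_000_000, 0.07),
    "other obligations (incl. GPAI)": (15_000_000, 0.03),
    "incorrect information to authorities": (7_500_000, 0.01),
}

def max_fine(turnover_eur: float, fixed_cap: float, pct: float) -> float:
    """Return the higher of the fixed cap and pct * turnover."""
    return max(fixed_cap, pct * turnover_eur)

# Hypothetical company with EUR 2 billion in global annual turnover.
turnover = 2_000_000_000
for violation, (cap, pct) in TIERS.items():
    print(f"{violation}: up to EUR {max_fine(turnover, cap, pct):,.0f}")
# For this turnover, 7% (EUR 140m) exceeds the EUR 35m fixed cap,
# so the percentage-based ceiling governs at every tier.
```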

In order to harmonize national rules and practices in setting administrative fines, the Commission will draw up guidelines.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.