This blog post focuses on Artificial Intelligence (AI) practices prohibited under the EU AI Act. Article 5 of the AI Act essentially prohibits AI practices that materially distort people's behavior or that raise serious concerns in democratic societies.

As explained in our previous blog post, this is part of the overall risk-based approach taken by the AI Act, which means that different requirements apply in accordance with the level of risk. In total, there are four levels of risk: unacceptable, in which case AI systems are prohibited; high risk, in which case AI systems are subject to extensive requirements; limited risk, which triggers only transparency requirements; and minimal risk, which does not trigger any obligations.

Some prohibitions are particularly relevant from a business perspective. Others are most likely to be relevant for governments or only apply in the context of law enforcement.

The list of prohibited AI practices is not set in stone. The European Commission will assess the need to amend this list once a year and share its findings with European Union lawmakers (Article 112).

Most Relevant AI Systems Prohibited for Businesses

The AI Act prohibits placing AI systems on the European Union's market, putting them into service, or using them in the European Union to materially distort people's behavior in a manner that causes or is likely to cause them physical or psychological harm:

  • Prohibited Practices. The AI Act prohibits placing on the market, putting into service, and using certain AI systems. "Use" is not defined in the AI Act, but "placing on the market" and "putting into service" are specific concepts defined in the AI Act:
    • Placing prohibited AI systems on the European Union's market. A company or an individual places an AI system on the market when it first makes it available in the European Union.
    • Putting prohibited AI systems into service in the European Union. A provider puts an AI system into service by supplying such a system for first use directly to a deployer or for its own use within the European Union for the system's intended purpose. Providers develop AI systems themselves or have them developed and market them, whether for payment or free of charge. Deployers use AI systems under their authority in the context of professional activities. Providers and deployers can be either legal entities or natural persons.
  • Prohibited Systems. The AI Act prohibits placing on the market, putting into service and using the following AI systems:
    • Subliminal, manipulative and deceptive systems. AI systems that deploy subliminal techniques beyond a person's consciousness or purposefully use manipulative or deceptive techniques that materially distort people's behavior by appreciably impairing their ability to make informed decisions. Such systems cause people to make decisions that they would not have otherwise taken, thereby causing, or being reasonably likely to cause, significant harm.
    • Exploiting vulnerabilities. AI systems that exploit people's vulnerabilities due to their age, disability, or social or economic situation. Such systems also distort people's behavior, thereby causing, or being reasonably likely to cause, significant harm.
    • Facial recognition databases. AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
    • Inferring emotions. AI systems that infer the emotions of individuals in the workplace or in educational institutions, except for AI medical or safety systems.
    • Biometric categorization. AI systems that categorize individual natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. Importantly, the processing of biometric data for the purpose of uniquely identifying an individual is subject to strict restrictions under the GDPR. Such processing is prohibited unless one of the limited exceptions applies, such as the data subject's explicit consent.

Prohibited AI Systems Likely to Be Used by Governments

The AI Act prohibits placing AI systems on the European Union's market, putting them into service or using them in the European Union for social scoring or "minority report" scenario purposes.

  • Social scoring. This refers to AI systems used for the evaluation or classification of people based on their social behavior or known, inferred, or predicted personal or personality characteristics. The prohibition applies where such social scoring leads to detrimental or unfavorable treatment that is
    • applied in social contexts unrelated to the contexts in which the data was originally generated or collected; and/or
    • unjustified or disproportionate to people's social behavior or its gravity.
  • Minority report. This refers to AI systems used to make risk assessments of individuals to identify or predict the risk that they will commit a criminal offense based solely on their profiling or on assessing their personality traits and characteristics. This prohibition, however, does not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity.

Prohibited AI Systems for Law Enforcement Purposes

The AI Act prohibits the use of real-time remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement. The AI Act does not prohibit placing such systems on the market or putting them into service. The prohibition does not apply, subject to specific safeguards, insofar as the use of real-time remote biometric identification is strictly necessary for

  • the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation, as well as the search for missing persons;
  • the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons, or a genuine and present or foreseeable threat of a terrorist attack; or
  • the localization or identification of persons suspected of having committed a criminal offense for the purposes of conducting a criminal investigation or prosecution or executing a criminal penalty. This only applies to specific offenses listed in the AI Act and punishable by a custodial sentence or a detention order for a maximum period of at least four years.

Enforcement and Fines

The prohibitions in Article 5 of the AI Act will apply six months from the date of entry into force of the AI Act, currently expected in June or July 2024.

Noncompliance with the prohibition of the AI practices mentioned above is subject to administrative fines of up to €35 million or up to 7% of the company's total worldwide annual revenue for the preceding financial year, whichever is higher.

National market surveillance authorities will be responsible for ensuring compliance with the AI Act's provisions regarding prohibited AI systems. They will report to the European Commission annually about the use of prohibited practices that occurred during that year and about the measures they have taken in this respect.

Finally, the European Commission will develop guidelines for the practical implementation of the AI Act provisions regarding prohibited AI systems.

The authors would like to thank David Llorens Fernandez for his assistance in preparing this alert.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.