On 13 March 2024, the European Parliament adopted the Artificial Intelligence Act ('AI Act'). With the Act expected to enter into force around May to June 2024, most businesses will then have a two-year period in which to become fully compliant. In this article we highlight the Act's purpose, who it applies to, what the requirements are, which AI practices are prohibited, the regulatory bodies and sanctions, and how businesses can prepare.

What is the purpose of the AI Act?

The AI Act is a set of rules designed to ensure that AI technology is used responsibly and fairly within the EU market.

The AI Act seeks to implement:

  • due diligence obligations during the AI system's development process;
  • tools to validate the accuracy of decisions made; and
  • channels for holding individuals responsible in case decisions are found to be incorrect.

Who does it apply to?

The Act will extend its reach to various players in the AI sector, both within and outside the EU's borders. Its broad scope aims to guarantee responsible and ethical use of AI systems by any organisation or government operating within the EU, regardless of where these systems are created or implemented. Consequently, even companies located outside the EU will be required to follow the Act's regulations if they wish to conduct business in Europe.

What are the requirements?

The AI Act classifies AI systems into different risk categories, setting out rules based on their potential impact on people and society. The Act groups AI into low-risk and high-risk systems, with some AI practices banned outright.

Low-risk AI needs to be transparent, so users know when they're interacting with it.

High-risk AI, meaning systems with the potential to cause significant harm to health, safety, fundamental rights, the environment, democracy, or the rule of law, faces strict rules. These include mandatory assessments, conformity checks, data governance, registration in an EU database, risk and quality management, transparency, human oversight, accuracy, robustness, and cybersecurity. Examples include medical devices, hiring tools, and critical infrastructure management.

Extensive control and supervision are needed for high-risk AI to ensure compliance.

What AI practices are prohibited?

In summary, the Act bans AI systems that manipulate human behaviour, exploit vulnerabilities, or enable social scoring by governments. It also prohibits biometric categorisation based on sensitive characteristics, predictive policing based solely on profiling, and the untargeted scraping of facial images from the internet to create facial recognition databases.

In addition, AI systems designed for emotion recognition will be prohibited in certain settings, namely the workplace, education, and law enforcement.

What are the new regulatory bodies?

EU institutions have agreed to establish new administrative structures, including:

An AI Office within the Commission, tasked with supervising advanced AI models, setting standards, and enforcing regulations across EU member states, similar to the AI Safety Institutes planned in the UK and US.

A scientific panel of independent experts to advise the AI Office on general-purpose AI (GPAI) models, develop evaluation policies for foundation models, and monitor safety risks.

An AI Board (EAIB) consisting of EU member state representatives, to offer guidance, share strategies, and ensure the AI Act is applied consistently across all EU member states.

Additionally, each EU member state will be required to designate national authorities responsible for overseeing compliance with the AI Act, similar to the supervisory authorities required under the GDPR, ensuring a clear point of contact for businesses and a structure for enforcing the Act's requirements.

What are the sanctions?

Non-compliance with the regulations could lead to substantial penalties. Depending on the severity of the violation, fines could reach as much as 7% of the worldwide annual turnover of the organisation responsible for the AI system. This is notably higher than the GDPR's maximum penalty of 4% of global annual turnover.

How can businesses prepare?

Organisations using or planning to use AI systems should begin by assessing the impact of those systems: mapping their processes and evaluating their compliance with the new rules set out in the AI Act.

It is also advisable to draw up an AI governance framework (rules and guidelines). This framework should align with your business objectives, identify where AI can help achieve those goals, and ensure that data is handled properly in line with existing legislation.

Consider the resources needed to support governance activities, as demand for AI governance professionals is likely to increase significantly. Making suitable preparations now could be seen as a small price to pay to ensure businesses do not fall foul of the Act and its sanctions regime.

Closing remarks

As EU legislation, the AI Act will not directly impact UK businesses in terms of operating within the UK. However, if your UK-based business operates in the EU, this new framework is likely to apply.

Businesses need to bear in mind that the legislative landscape is changing as we enter a new age of technology. It is important for businesses to be proactive rather than reactive when considering how AI is used and the extent to which the AI Act applies.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.