What has happened?

The European Commission has finally published its much-anticipated proposal for a broad regulation to cover the use of artificial intelligence in the EU. This is a world-first – no other jurisdiction has yet taken a step of this nature, and with an ambitious timeline seemingly in the Commission's sights, it is clear Europe is determined to lead the world in this important area of technology policy.

Why is it important?

Modeled on the familiar European regime for product safety regulation, based around reliance upon "harmonised standards" and the use of the "CE Mark", the European proposals are set to significantly change the way in which companies develop, market and use smart digital technologies in virtually all their forms.

Given the heavy reliance on data by AI systems, the proposed regulation has also drawn extensively from existing data protection and cybersecurity rules (most notably, the General Data Protection Regulation, but also the NIS Directive), echoing, among other concepts, data processing transparency, data retention limits, implementation of appropriate safeguards to protect data, and data breach notification duties. Given these similarities, the proposal could be labelled the Artificial Intelligence Protection Regulation (AIPR).

Whilst the proposals cover the use of "artificial intelligence," broadly defined, in a wide range of applications, they will be especially important for developers of innovative products deploying digital technologies: products being designed right now will need to comply with requirements that loom clearly on the near horizon.

Although there is much in the proposal that will generate controversy and concerns amongst stakeholders, regulatory intervention in connection with AI is now inevitable. Whilst many will continue to doubt that such a broad-brush approach is the right way forward, and to worry that it risks stifling good innovation without sufficient benefit, the fact that there is now a published draft regulation with prospects of becoming law will help stakeholders focus the discussion and potentially lead to a constructive and workable way forward.

What are some of the key features?

The proposed regulation is some 125 pages long, an indication of its importance and scope. Some of the most eye-catching features are:

  • The definition of artificial intelligence is very broad and will encompass technologies that many may not consider to be true AI. In short, it is software developed using certain broadly defined AI approaches and techniques that can generate outputs "influencing the environments they interact with."
  • Just like the GDPR, the proposed rules will have extraterritorial effect, with obligations extending to providers and users based outside of the EU where, for example, the output produced by the system is used in the EU.
  • As under most EU product safety regulations, the primary obligations fall on the party placing the system on the market (in this case called the "provider"). Lesser, but still onerous, obligations fall on importers, distributors and users of AI systems.
  • The proposal takes a risk-based approach, with a pyramid of requirements depending on the level of risk the AI system poses (unacceptable risk, high risk, limited risk and minimal risk).
  • At the top of the pyramid, some "artificial intelligence practices" will be banned, which the Commission has characterized as posing "unacceptable risks." This list includes, for example, AI systems that deploy "subliminal techniques beyond a person's consciousness," exploit the vulnerabilities of specific groups of persons, such as children, enable social scoring by governments, or allow live remote biometric identification in publicly accessible spaces for law enforcement purposes, unless those systems fall within the scope of the limited exceptions provided by the draft regulation.
  • A number of heightened requirements are proposed for a broad range of "high-risk" AI systems. These may include AI systems performing a safety function in certain products, including, for example, mobile devices, IoT products, robotics and other machinery, toys and medical devices.
  • The requirements for certification of high-risk AI systems will be demanding, including safeguards against various types of bias in data sets, the establishment of acceptable data governance and management practices, the ability to verify and trace back outputs throughout the system's life cycle, and provisions for acceptable levels of transparency and understandability for users of the systems and for appropriate human oversight of the system generally.
  • If "substantial modifications" are made to high-risk AI systems during their life cycle, recertification will be required. 
  • Certain high-risk AI systems will also need to be registered, with specified information about the system stored on a publicly available database.
  • Providers of high-risk AI systems will be required to establish and document post-market monitoring systems to cover the whole product life cycle.
  • Incidents caused by the failure of a high-risk AI system that have resulted, or could have resulted, in serious injury or damage to property must be reported to the authorities within 15 days of the incident.
  • Further requirements imposed on high-risk AI systems are directly inspired by GDPR requirements, such as the obligations to keep records, implement human oversight, ensure the accuracy and security of the data sets, and appoint a representative for providers established outside of the EU. Furthermore, the requirement to build respect for ethical principles and human rights into the design of the AI system has been wrapped in the concept of "Ethical by design," which largely mirrors the GDPR's famous principle of "Privacy by design."
  • At the same time, the rules on personal data are balanced against the need to preserve opportunities for further AI innovation. Thus, the regulation calls for "regulatory sandboxes" to be created, in which personal data could be used more freely for the purposes of developing AI systems.
  • Certain AI systems considered to carry a "limited risk," including those intended to interact with natural persons, will be subject to transparency requirements. These are similar to the GDPR transparency obligations and include notifying persons of the existence of the system, together with specific notifications if personal data is being used to identify intentions or predict behaviors of persons, or to assign persons to specific categories, such as sex, age, ethnic origin or sexual orientation.
  • At the bottom of the pyramid, which the Commission describes as "minimal risk," are the majority of AI systems, for example, AI-enabled video games and spam filters. These AI systems will not be subject to significant regulatory interference.
  • Providers of non-high-risk AI systems will be encouraged to sign up to voluntary codes of conduct, intended to foster the voluntary application of the mandatory requirements that apply to high-risk AI systems, as well as additional requirements (e.g., related to sustainability, accessibility and diversity in development teams).
  • Fines of up to €30 million, or 6% of global annual turnover, whichever is higher, are envisaged for certain categories of breaches (see the worked illustration after this list).
  • A new layer of regulatory supervision is envisaged. A European AI regulator (the European Artificial Intelligence Board) would be created to oversee the regulation, while at the Member State level, national supervisory authorities would be appointed.
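
To make the fining arithmetic concrete, a minimal worked illustration follows. The turnover figure is hypothetical, chosen purely for illustration, and the calculation assumes (on our reading of the draft) that the higher of the two caps applies:

\[
\text{maximum fine} = \max\bigl(\text{€}30\text{m},\ 6\% \times \text{global annual turnover}\bigr)
\]

For a company with a hypothetical global annual turnover of €2 billion, this gives max(€30m, 6% × €2,000m) = max(€30m, €120m) = €120m. Put differently, for any company with global turnover above €500 million (since 6% × €500m = €30m), the turnover-based figure, rather than the €30 million figure, sets the ceiling.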

What does it mean for companies?

Fundamentally, if systems or products are being developed now that have associated software that performs any sort of safety function, and that software has any autonomous learning or autonomous updating capability, the system or product is likely to be caught by these proposals. This means that the work being done now needs to take into account the requirements that may be imposed before that system or product can be marketed. This needs to be factored into the design philosophy for the software.

Companies that are merely using AI systems should also take account of the new rules once adopted. If high-risk AI systems are used, companies will need to put in place a risk management system to ensure that all risks associated with the use of such systems within the context of the company's particular business are accounted for. Furthermore, if an AI system is used to assist the company in interacting with its customers, certain transparency duties will apply, such as clearly indicating to customers when they are interacting with an AI system.

Moreover, given the territorial effects of the regulation, the above implications will be equally relevant to companies that are based outside the EU but develop, supply or use AI systems which nevertheless engage with EU consumers.

What happens next?

Whilst the Commission envisages an ambitiously short time period for full implementation, there is much that will happen in the meantime. The legislation still needs to work its way through the European legislative process, and amendments are likely. There is much in here that will generate discussion and controversy, and there will be opportunities for representations to be made by stakeholders through various avenues.

One notable element of the European Parliament's framework from last year that has not been picked up by the Commission is a new strict liability regime for AI that causes damage. This is a welcome development, as a new AI-specific product liability regime risked unnecessarily duplicating existing EU rules. We do, however, expect to see changes to the EU's current product liability regime to reflect developments in this area.

Given that this is a world first, and bearing in mind the international influence of the European Union, there is every reason to expect that this model, if successfully implemented, will be highly influential in the approach taken elsewhere in the world.

There is a lot at stake and a lot of change on the near horizon.

Testing times lie ahead for smart products and for the future development of AI systems generally.
