Introduction

Artificial Intelligence ("A.I.") is rapidly advancing, and with it comes evolving (and real life) legal and ethical challenges. The European Union's proposed "AI Act" (the "Act") was first proposed in April 2021 and its aim is to be the world's first comprehensive AI law and establish a legal framework to regulate AI systems, ensure their ethical use, safeguard fundamental rights, and promote trust and transparency for its users. On 14 June 2023, the European Parliament adopted its position on the Act, with a clear majority (499 to 28) in favour of its enactment. This follows the position of the Council of the European Union, who, on 6 December 2022, adopted its general approach to the Act and called for promotion of safe A.I. that respects fundamental rights at its core.

The timeline for finalisation and implementation of the Act is uncertain and could extend into mid-2024 (with an 18-month transposition period thereafter). Given the complexity of the subject and the need for thorough consideration of potential impacts and societal concerns, it is to be hoped that the Act's implementation can keep pace with the rapid evolution of AI systems.

This note provides a starting point for unravelling these complexities by focusing on the AI Act in its current form, examining some provisions concerning generative AI (particularly large language models ("LLMs") such as OpenAI's GPT-3 and GPT-4), and exploring the potential impact of those provisions on businesses looking to use 'general purpose' LLMs as part of their operations now and in the future.

AI Act and its components: Explained

"AI" means different things to different people. For some, AI is about artificial life forms that will (and many believe have already) surpass human intelligence, and for others, nearly any method of data processing technology can be labelled "AI". The Act itself will adopt as broad and far reaching a definition as possible and will aim to be, according to the proposal itself, "as technology neutral and future proof as possible, taking into account the fast technological and market developments related to AI".

The Act identifies four risk categories for AI systems, based on the level of potential harm they may cause. The compliance obligations will be guided by the category into which an AI system falls (a minimal illustrative sketch follows the list below).

  • Unacceptable risk: AI systems considered to pose an unacceptable threat to individuals' safety, rights, and livelihoods, which will be banned outright. This category covers practices such as social scoring by public authorities, manipulative or exploitative systems, and (subject to narrow exceptions) real-time remote biometric identification in publicly accessible spaces.
  • High risk: AI systems likely to cause substantial harm or affect people's fundamental rights. This category covers AI used in areas such as critical infrastructure, employment, education, access to essential public services, law enforcement, and migration. Examples include AI-based hiring tools, school grading systems, credit scoring, and facial recognition in public spaces. Systems in this category will generate significant compliance challenges for providers.
  • Limited risk: AI systems with potential risks that do not fall within the unacceptable or high-risk categories. This includes AI systems used in non-critical public services or private sector applications that may affect an individual's rights or safety, albeit to a lesser degree.
  • Minimal or no risk: AI systems that do not fall into any of the above categories will not be subject to compliance obligations.
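
To make the tiered approach concrete, the following is a minimal, purely illustrative sketch (in Python) of how a business might record an internal triage of proposed AI use cases against the Act's four tiers. The tier names track the Act, but the use-case mapping and function names are our own simplifications, not the Act's actual classification logic, and are no substitute for legal review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright under the Act
    HIGH = "high"                  # heavy compliance obligations
    LIMITED = "limited"            # mainly transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical, simplified mapping for internal triage only -- the Act's
# real classification turns on its annexes and definitions.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,         # employment: high risk
    "exam_grading": RiskTier.HIGH,         # education: high risk
    "customer_chatbot": RiskTier.LIMITED,  # transparency duties apply
    "spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return a provisional tier, defaulting to HIGH so that any
    unknown use case is escalated for human and legal review."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

for case in ("cv_screening", "customer_chatbot", "novel_use"):
    print(case, "->", triage(case).value)
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative choice: it forces escalation rather than silent under-classification.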

Generative AI refers to a type of AI system that can generate human-like content, such as images, music or text, based on what it has learned from vast amounts of data. It uses algorithms to understand and mimic the patterns in that data and to create new content that is similar to, and sometimes better than (ask those affected by the writers' strike in Hollywood), what humans would produce.

These generative AI systems are exemplified by LLMs such as OpenAI's GPT models, which power ChatGPT. LLMs are a specific type of generative AI model focused on natural language processing. They are trained on vast amounts of text data and are capable of generating human-like text, answering questions and even engaging in conversations.
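
By way of illustration, here is a minimal sketch of interacting with such a model through the openai Python library (the pre-1.0 interface current at the time of writing); the prompt and placeholder API key are our own assumptions.

```python
import openai

openai.api_key = "sk-..."  # placeholder; supply your own key

# Ask the model a question and print its generated answer.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Explain the EU AI Act in two sentences."},
    ],
)
print(response.choices[0].message.content)
```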

How the AI Act (in its current form) would affect LLMs

Generative AI, as it is currently defined under the Act, falls into the "Limited Risk" category. The compliance obligations focus on transparency for the user. Notable provisions include:

  • Data governance: The Act places an emphasis on the importance of using high-quality and diverse training data to avoid discriminatory output. Anyone who wishes to develop generative AI in the future, therefore, must integrate rigorous data governance practices to ensure compliance with the Act. What "compliance" will look like in real (i.e. non-AI) terms is not yet clear.
  • Transparency requirements: Generative AI systems will need to provide explicit information about their artificial nature to users. Users should be aware that they are interacting with an AI system and not a human. In practical terms, does this mean that if a business generates a letter using AI, it must state on the letter that "this letter was generated with the help of artificial intelligence"? As yet, the practicalities have not been settled and will need to be considered in the future (a minimal sketch of one possible approach follows this list).
  • Accuracy and reliability: The Act outlines that generative AI should be monitored and tested regularly to ensure the accuracy and reliability of the generated outputs. Businesses will need to implement measures to detect and correct errors, while also ensuring clear accountability for any misleading, incorrect or harmful content produced. Notably, however, although the European Parliament has recommended that consideration be given to assessing patent law in light of the development of AI, the Act does not specifically address such issues.
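
As a purely illustrative sketch of the transparency and accuracy points above, the helpers below append a disclosure notice to AI-generated text and log each generation so outputs can be sampled and checked later. The notice wording, function names and log format are all our own assumptions; the Act does not (yet) prescribe any of them.

```python
import datetime
import json

AI_NOTICE = ("This letter was generated with the assistance of "
             "artificial intelligence.")

def label_ai_output(text: str) -> str:
    """Append a disclosure notice to AI-generated text; one possible
    approach only, as the Act prescribes no wording or placement."""
    return f"{text}\n\n{AI_NOTICE}"

def log_for_review(prompt: str, output: str,
                   path: str = "ai_output_log.jsonl") -> None:
    """Record each generation so outputs can later be sampled and
    checked for accuracy and reliability."""
    record = {
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "prompt": prompt,
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

letter = label_ai_output("Dear customer, ...")
log_for_review("draft a customer letter", letter)
print(letter)
```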

The Act proposes steep penalties for non-compliance. For companies, fines can reach up to €30 million or 6% of total worldwide annual turnover. Submitting false or misleading documentation to regulators can also result in fines.
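
To illustrate the scale of that exposure, a short worked example (assuming, on the GDPR model, that the higher of the two figures applies; the final text of the Act will govern):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Indicative ceiling only: up to EUR 30m or 6% of total worldwide
    annual turnover, assumed here to be whichever is higher."""
    return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)

# A company with EUR 2bn turnover could face fines of up to EUR 120m.
print(f"{max_fine_eur(2_000_000_000):,.0f}")
```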

What the Act's provisions (in their current form) will mean for businesses and their users

This article focuses on the use of general-purpose AI in a general sense within organisations, rather than on industry- or purpose-specific AI. Examining the Act's provisions on generative AI in their current form, some key issues present themselves, highlighted below:

  • IP Ownership: Ownership of the output from generative AI has not been dealt with by the Act or in existing copyright law. While it is not possible to consider the potential application of copyright rules to AI in any detail here, suffice to say that if ownership of IP rights in content, products or services generated by your business is important, the use of AI tools as part of the development process should be carefully considered. If necessary, steps should be taken to mitigate risks, such as documenting the creative process, developing and implementing employee/contractor policies on AI use and including specific, appropriate provisions in customer contracts (further on each below).
  • IP Infringement: Equally, businesses will need to be wary of the potential implications of developing and/or publishing anything created using generative AI. A business could breach third-party IP rights by using a generative AI system trained on third-party copyright works. "Pointing the finger", as it were, does not become any easier after a breach of this nature: although the Act obliges providers of general-purpose AI to disclose when they collect and use copyrighted material to train the system, this does not mean that your business would not be exposed to an infringement claim when using that system for its own purposes.
  • Commercial Contracts: If a supplier introduces AI-related provisions into its contracts, should the customer seek to be indemnified for any loss suffered as a result of the use of AI in providing the service? Equally, if a service provider intends to use AI (whether general purpose or otherwise) as part of its provision of services to customers, its contract will need to address matters such as the customer's agreement to the use of AI, ownership of AI-generated output, and liability for AI-generated content. From the customer's perspective, matters such as the service provider's compliance with laws such as the Act and the GDPR, human oversight and liability for AI-generated content will be priorities. For more, see our previous article on this topic.
  • AI-Specific Policies: Although there may be benefits to using AI as part of your business, there are also clear risks, arising not least from the impending requirements of the Act on providers and users. Business risks and benefits must be balanced, starting with the development and implementation of a policy to govern and manage the use of general-purpose AI in the business. Such a policy would need to address matters such as the specific areas in which general-purpose AI may be used, how requests to use generative AI should be made, specific business protection measures such as recording individual use instances (a minimal sketch of such record-keeping follows this list), what existing business data can be inputted, protecting IP/personal data, and so on. In short, governance of AI use must be a critical part of a business's risk management and compliance practices.
  • Audit Requirements: The tiered approach to the risk categories will require businesses to carry out an internal impact assessment to determine the level of risk associated with any AI system they intend to implement. If, for example, an AI system is used in the recruitment process, it will fall into the high-risk category. Given the implications of using AI in decision-making in an area such as this, there are layers of regulatory requirements within the Act to be met.
  • Data Protection: Don't forget that in May 2018 the EU also led the way in developing a comprehensive framework for the protection of personal data: the GDPR. AI is just one more field in which data protection principles and rules must be applied. In addition, matters such as automated decision-making (particularly relevant to AI systems) are already regulated by the GDPR, and this will continue notwithstanding the additional specific requirements of the Act.
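
To make the record-keeping point in the AI-specific policies item concrete, here is a minimal sketch of logging individual AI use instances against a hypothetical internal policy. Every name, category and rule here is our own illustrative assumption, not something the Act prescribes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical internal policy: which business data may be put into a
# general-purpose AI tool. Categories are illustrative only.
ALLOWED_INPUT_CATEGORIES = {"public_marketing_copy", "anonymised_stats"}

@dataclass
class AIUseRecord:
    user: str
    tool: str
    purpose: str
    data_category: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def request_ai_use(record: AIUseRecord, log: list) -> bool:
    """Approve or refuse a proposed use and record it either way,
    preserving the audit trail the policy calls for."""
    approved = record.data_category in ALLOWED_INPUT_CATEGORIES
    log.append((record, approved))
    return approved

usage_log: list = []
ok = request_ai_use(
    AIUseRecord("j.smith", "gpt-4", "draft blog post",
                "public_marketing_copy"),
    usage_log,
)
print("approved:", ok)
```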

Conclusion

The Act aims to strike a balance between safeguarding fundamental rights by strictly regulating AI systems, ensuring their ethical and transparent use, and embracing their creative potential. Sure, navigating this regulatory landscape will be like walking a tightrope while juggling a dozen tennis balls, but hasn't this been done before? (Answer: yes, by Philippe Petit in 1974.) The best way for businesses to ensure that the AI systems they use (and their suppliers of AI solutions) comply with the regulations, and so avoid penalties, is to take proactive steps early. Appropriate risk management strategies should be developed and implemented to govern how AI will be used by the business, in line with the requirements of the Act. Businesses should also educate themselves so they fully understand the capabilities of generative AI and how to use it responsibly. This will be critical as the technology grows both more advanced and more commonplace.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.