The UK Government has published its response to the public consultation on its March 2023 AI White Paper, which set out the UK's proposed regulatory framework for AI. Twelve months on from the White Paper, where are we now? Overall, the Government continues to believe that a non-statutory, context-based approach to regulating AI is the best way forward because it offers "critical adaptability" but acknowledges that the risks posed by general-purpose AI could still fall through the cracks. In this briefing we outline where the Government has reinforced its commitment to the approach in the White Paper, as well as areas over which large question marks still hang.

  • A quick recap: the approach set out in the AI White Paper
  • More of the same (with a little extra funding)
  • ...but will this approach work for general-purpose AI?
  • The AI Copyright Code impasse
  • The issue of liability

A quick recap: the approach set out in the AI White Paper

In contrast to the approach adopted by the European Union, whose member states unanimously approved the final text of the EU AI Act earlier this month, the UK Government proposed to rely on the existing regulatory framework to address the risks posed by AI, with regulators being guided by five cross-sectoral principles to be interpreted and applied within their respective sectoral remits.

The five AI principles

  1. Safety, security and robustness.

  2. Appropriate transparency and explainability.

  3. Fairness.

  4. Accountability and governance.

  5. Contestability and redress.

For further information regarding the White Paper, please see our recent briefing here.

More of the same (with a little extra funding)

The consultation response largely reaffirms the Government's commitment to this approach: it remains determined "not to rush to regulate AI". Despite strong support among consultation respondents for putting the five principles on a statutory footing, the Government has no plans to do so, wishing to retain flexibility. The focus instead is on how its plans will be implemented in practice. Most notably, the Government has asked several regulators (including the Information Commissioner's Office, the Financial Conduct Authority and the Competition and Markets Authority (CMA)) to publish an update outlining their strategic approach to AI by 30 April 2024.

What can we expect to see in the guidance?

Regulators are encouraged to include in their guidance:

  • An outline of the steps they are taking in line with the expectations set out in the White Paper.

  • An analysis of AI-related risks in the sectors and activities they regulate, and the actions they propose to take to address these.

  • An explanation of their current capability to address AI as compared with their assessment of requirements, and the actions they are taking to ensure they have the right structures and skills in place.

  • An indication of their plans and activities in the coming 12 months.

This suggests that UK businesses and other stakeholders can shortly expect some welcome guidance on how regulators will apply the UK's legal and regulatory framework to AI. The core message, however, remains the same: when deploying AI, UK businesses should continue to be mindful of their existing legal and regulatory obligations.

The consultation response also expands upon the Government's plan for a new central function to "monitor and assess risks across the whole economy and support regular coordination and clarity" and the steps it intends to take to analyse and review potential gaps in existing regulatory powers and remits. Specifically, a new "multidisciplinary team" is to be established within the Department for Science, Innovation and Technology (DSIT) to monitor for cross-sectoral issues, and an additional £10 million of funding has been allocated to help regulators develop capabilities to respond to AI that some may currently lack. This resourcing issue was laid bare in the recent House of Lords Communications and Digital Committee report regarding large language models and generative AI, which identified numerous regulators including the CMA and Ofcom as, at the date of publication, having zero dedicated AI governance staff.

...but will this approach work for general-purpose AI?

Signalling a potential change in direction, the consultation response does, however, make a case for "further targeted binding requirements on [developers of] highly capable general-purpose AI systems", which it seems safe to assume would capture organisations such as OpenAI and Google DeepMind. The response acknowledges that, if these systems continue to develop at the current rate, voluntary measures may be "deemed incommensurate to the risk". The Government also recognises that if the risks posed by general-purpose AI (which are likely to have cross-sectoral impact) are left to regulators to address within their current remits based on existing laws, those risks may not be effectively mitigated.

However, the Government does not want to introduce binding measures "too soon" and the response (perhaps unsurprisingly in a general election year) puts this issue squarely within the "wait and see" box.

The AI Copyright Code impasse

The White Paper also acknowledged the unresolved tension between the extent to which intellectual property rights (specifically, copyright) are being undermined by the unauthorised use of data for the training and deployment of AI and the need for high-quality training material. In the hope of finding a non-legislative solution, the UK Intellectual Property Office (UKIPO) convened a working group of rights holders and AI developers to draw up a voluntary AI copyright code of practice intended to promote investment in creativity while overcoming the barriers AI developers face to data mining.

However, it was announced on 5 February 2024 that the code has been shelved after the working group failed to agree on a voluntary framework. Responsibility for finding a route through this complex issue has therefore returned to DSIT, assisted by the Department for Culture, Media and Sport. The consultation response indicates that ministers from these departments will now lead a further period of engagement with the AI and rights holder sectors. It is looking doubtful whether a voluntary code of practice is workable, and new legislation (which the UKIPO's summary of the Government's ongoing programme of work in this area, published on 29 June 2023, noted as a possibility) may be a more realistic outcome.

The issue of liability

A further key issue which remains unresolved is that of liability - specifically, how to fairly and effectively distribute legal responsibility across the AI value chain, and how to identify the point at which one regulator's remit ends and another's begins. The consultation response again skates over the issue of liability, admitting that there is "no easy answer".

Even though some regulators can enforce existing laws against the developers of the most capable general-purpose systems within their current remits, the wide range of potential uses means that general-purpose systems do not currently fit neatly within the remit of any one regulator, potentially leaving risks without effective mitigations.

The Government's ongoing analysis of this area and the international conversation around general-purpose AI may lead to the conclusion that - once "understanding of risk has matured" - the introduction of further legislation is unavoidable in order to maintain public trust and mitigate the risks posed by AI. The obvious difficulty will then be developing a law (or set of laws) that successfully treads the fine line between promoting innovation (thereby encouraging investment in the UK) and protecting the public from a range of risks (some of which are known and some of which are not).

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.