Generative artificial intelligence (AI) tools such as ChatGPT are increasingly being used by employees for work purposes, with or without their employer's knowledge. However, AI tools are not (yet) above creating false information. Who could be liable for the serious harm suffered as a result of publishing that information?

While we await the specifics of AI regulation in the UK, employers should act now to devise internal policies on the use of generative AI in the workplace that properly factor in the risks faced by users who republish false output about third parties.

AI tools are known to 'hallucinate', i.e. to invent false information or fabricate content to fill gaps in their knowledge. If a false and defamatory statement created by generative AI is published about an individual or entity, and that statement has caused or is likely to cause serious harm, the question arises: who could be liable under English defamation law for the serious harm suffered as a result of that publication?

In practice, the greater legal risk is faced by the users of these tools, who repeat the false and defamatory statements produced by the system. But could such a publication be defended in a libel claim? You can find out more in this article, originally published by the New Law Journal (21 July 2023).

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.