Recent decisions are shining a light on Artificial Intelligence ("AI") hallucinations and the potential implications for those who rely on AI-generated output. An AI hallucination occurs when a type of AI called a large language model generates false information. This false information, if provided to courts or to customers, can result in legal consequences.

Reliance on AI Chatbots in Court

In Zhang v Chen, 2024 BCSC 285, the BC Supreme Court considered whether to order costs personally against a lawyer who relied on hallucinated cases. Mr. Chen's lawyer filed a notice of application for parenting time that relied solely on two cases. Upon receiving the application materials, opposing counsel raised concerns that the cases could not be located and requested copies. This inquiry revealed that the cases did not exist and had been invented by ChatGPT. The application eventually proceeded without reliance on the two cases.

Ms. Zhang's lawyer sought party and party costs, as well as special costs against Mr. Chen's counsel. Justice Masuhara awarded costs of the application in favour of Ms. Zhang. With respect to whether counsel for Mr. Chen should be personally responsible for costs or special costs, Justice Masuhara accepted that: (a) the lawyer was naïve about the risks of using AI; (b) there was no intention to deceive or misdirect; and (c) her apology was sincere. He also noted that she withdrew the cases prior to the hearing and that there was no real risk of the Court being misled. He concluded that special costs were not appropriate in this case. However, Justice Masuhara provided a cautionary note that citing fake cases in court filings and other materials handed up to the court is an abuse of process and is tantamount to making a false statement to the court. Unchecked, it can lead to a miscarriage of justice.

While Justice Masuhara dismissed the request for special costs, he recognized that as a result of counsel's insertion of the fake cases and the delay in remedying the confusion they created, opposing counsel had to take various steps they would not otherwise have had to take. Justice Masuhara, therefore, ordered that regular tariff costs should be payable by Mr. Chen's counsel personally. In reaching this conclusion, Justice Masuhara cited two Law Society Directives about AI, both of which reminded counsel of their ethical obligation to ensure materials before the Court are accurate. He also cited a recent study on the reliability of AI in the context of legal work:

[38] The risks of using ChatGPT and other similar tools for legal purposes was recently quantified in a January 2024 study: Matthew Dahl et. al., "Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models" (2024). The study found that legal hallucinations are alarmingly prevalent, occurring between 69% of the time with ChatGPT 3.5 and 88% with Llama 2. It further found that large language models ("LLMs") often fail to correct a user's incorrect legal assumptions in a contrafactual question setup, and that LLMs cannot always predict, or do not always know, when they are producing legal hallucinations. The study states that "[t]aken together, these findings caution against the rapid and unsupervised integration of popular LLMs into legal tasks."

This study, along with the Zhang decision as a whole, serves as an important reminder that legal professionals must carefully verify AI-generated output before incorporating it into their research and court materials.

Reliance on AI Chatbots on Company Websites

Other decisions also provide important lessons for organizations that use customer-facing AI chatbots. For example, decision-makers have noted that customers should be able to rely on information provided by an AI chatbot in the same manner that they rely on information published elsewhere on an organization's website.

Accordingly, if an AI chatbot produces inaccurate information, organizations risk claims from customers who relied on that information. For example, if an AI chatbot hallucinates a customer-favourable policy, a decision-maker may conclude that the organization is required to honour it.

This serves as a caution to all organizations about the dangers of relying upon AI chatbots without supervision or oversight. To protect themselves when permitting AI chatbots to interact with customers, organizations should consider using prominent exclusions of liability, and perhaps requiring express consent to use the chatbot subject to those exclusions.

Summary

Whether you are an organization or a legal professional, be mindful of the AI-generated content you rely on, whether it comes from a chatbot or a research tool. Consider putting in place reasonable processes to attempt to confirm that an AI tool's output is accurate, or to obtain appropriate disclaimers where accuracy cannot be assured.

You can learn more about AI and the law by reading Fasken's bulletins, "Artificial Intelligence - Protecting Privilege with Artificial Intelligence in the (Virtual) Room" and "Bill C-27: Federal Government Releases Amendments to Canada's Proposed AI Law".

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.