Since the launch of ChatGPT, an artificial intelligence chatbot, in November 2022, businesses and educational institutions have grappled with the use of artificial intelligence. While privacy and confidentiality concerns have been at the forefront of discussions, there is increasing awareness of the chatbot's tendency to invent sources, references and citations.

On Friday, June 23, 2023, in the first direction of its kind in Canada, the Court of King's Bench of Manitoba issued a practice direction on the Use of Artificial Intelligence in Court Submissions:

With the still novel but rapid development of artificial intelligence, it is apparent that artificial intelligence might be used in court submissions. While it is impossible at this time to completely and accurately predict how artificial intelligence may develop or how to exactly define the responsible use of artificial intelligence in court cases, there are legitimate concerns about the reliability and accuracy of the information generated from the use of artificial intelligence. To address these concerns, when artificial intelligence has been used in the preparation of materials filed with the court, the materials must indicate how artificial intelligence was used.1

(emphasis added)

The issuance of this practice direction follows a now widely reported incident in the United States, in which the United States District Court for the Southern District of New York was "presented with an unprecedented circumstance": the use of artificial intelligence in court submissions that contained "citations to non-existent cases".2 The Court noted that "[s]ix of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations".3 The Court required the lawyer to "show cause" as to why "he ought not to be sanctioned".4

On June 22, 2023, following the "show cause" hearing, the District Court released a scathing decision sanctioning counsel and his law firm, holding that "[a]n attempt to persuade a court or oppose an adversary by relying on fake opinions is an abuse of the adversary system"5 and making a finding of bad faith:

In researching and drafting court submissions, good lawyers appropriately obtain assistance from junior lawyers, law students, contract lawyers, legal encyclopedias and databases such as Westlaw and LexisNexis. Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings. [The Respondents] abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.6

Given the public's increasing awareness of ChatGPT's propensity to fabricate information, it is not surprising that a Canadian court has now issued a practice direction to counsel on the use of artificial intelligence. While the Court of King's Bench of Manitoba does not explicitly prohibit the use of artificial intelligence in preparing court submissions, it does require lawyers to indicate how artificial intelligence was used in the preparation of materials.

We anticipate that other Canadian courts will quickly follow suit, implementing similar practice directions.

Footnotes

1 Practice Direction re Use of Artificial Intelligence in Court Submissions dated June 23, 2023.

2 Order to Show Cause dated May 4, 2023.

3 Order to Show Cause dated May 4, 2023.

4 Order to Show Cause dated May 4, 2023.

5 Opinion and Order on Sanctions dated June 22, 2023.

6 Opinion and Order on Sanctions dated June 22, 2023.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.