Lawyers will increasingly integrate artificial intelligence ("AI") into their practice. For now, however, lawyers practicing before the Federal Court of Canada (the "Court") will need to declare when they have used AI to prepare documents, and must follow additional directions provided by the Court regarding the use of AI.

On December 20, 2023, the Court published a practice notice regarding the use of artificial intelligence by litigants in Court proceedings (the "Practice Notice"). On the same date, the Court published a document setting out interim principles and guidelines for its own use of artificial intelligence (the "Interim Principles and Guidelines").

The Court's move reflects the growing use of generative artificial intelligence in the legal profession. In both documents, the Court recognizes that adopting this technology can provide substantial efficiency benefits, while emphasizing caution in light of its potential risks.

Generative AI vs Other Types of AI

The Practice Notice specifically addresses the use of generative AI, a term which describes "a computer system capable of generating new content and independently creating or generating information or documents". The Court clearly states that the Notice does not apply to other types of AI which do not generate new content.

This clarification is important, since "artificial intelligence" is currently used to describe a wide range of concepts. In practice, however, the Practice Notice is primarily concerned with a specific type of generative AI called a "large language model", or "LLM". An LLM is a program that processes written inputs, or "prompts", and draws on large amounts of training data to produce output that mimics human writing.
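
To make this concrete, the toy sketch below (in Python) illustrates the "prompt in, generated text out" pattern described above. It is a deliberately simplified, hypothetical example: real LLMs use neural networks trained on vast corpora, whereas this sketch uses a simple bigram model over a tiny sample text purely to show how a program can continue a prompt in a way that mimics its training data.

    # Toy illustration only: a bigram (Markov chain) text generator,
    # not how any production LLM actually works.
    import random
    from collections import defaultdict

    TRAINING_TEXT = (
        "the court published a practice notice regarding the use of "
        "artificial intelligence the court encourages careful review of "
        "artificial intelligence outputs"
    )

    def build_bigram_model(text):
        """Map each word to the list of words observed to follow it."""
        words = text.split()
        model = defaultdict(list)
        for current_word, next_word in zip(words, words[1:]):
            model[current_word].append(next_word)
        return model

    def generate(model, prompt, length=10):
        """Continue the prompt by repeatedly sampling an observed next word."""
        output = prompt.split()
        for _ in range(length):
            candidates = model.get(output[-1])
            if not candidates:  # no observed continuation; stop generating
                break
            output.append(random.choice(candidates))
        return " ".join(output)

    model = build_bigram_model(TRAINING_TEXT)
    print(generate(model, "the court"))

Running this might print, for example, "the court published a practice notice regarding the use of artificial intelligence"; the output varies from run to run, loosely analogous to the way LLM outputs vary.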

In contrast to the Practice Notice, the Court's Interim Principles and Guidelines cover a wider range of AI-type technologies, including programs used to analyze raw data and to perform administrative tasks. This broader scope reflects the Court's 2020-2025 Strategic Plan, which announced the Court's intention to explore the use of AI to streamline its processes (e.g., through online "smart forms") and to aid in mediation and other forms of alternative dispute resolution (ADR).

Use of AI by Litigants

The Practice Notice serves three main functions: (1) it mandates the inclusion of a declaration when AI was used to generate content in a Court document; (2) it sets out principles for the use of AI by litigants; and (3) it explains why the Court has published the Practice Notice.

The Notice was published at the end of a year that featured high-profile and embarrassing failures by lawyers who relied on LLMs without exercising proper oversight, including multiple instances in which an LLM was used to generate documents filed with a court.

For instance, on June 23, 2023, District Judge P. Kevin Castel issued an opinion and order sanctioning two lawyers and their law firm for using an LLM to draft an affidavit, for failing to properly review and verify its contents, and for refusing to withdraw the affidavit once the other party questioned it (Mata v Avianca Inc, Case No. 22-cv-1461 (PKC) (SDNY)). The challenged affidavit included a number of citations to decisions that did not actually exist, and that instead had been fabricated (or "hallucinated") by the LLM.

Despite this high-profile American decision, a British Columbia lawyer later used ChatGPT to prepare a notice of application. The notice of application contained only two citations, both of which referred to case law that did not exist. This mistake resulted in significant negative publicity for the lawyer and an order of costs against her personally (Zhang v Chen, 2024 BCSC 285). In awarding costs against her, the BC Supreme Court, in a decision that cited Mata v Avianca, provided the following comments:

As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers. Competence in the selection and use of any technology tools, including those powered by AI, is critical. The integrity of the justice system requires no less.

In providing guidance on the use of AI, the Court has adopted an approach that is largely consistent with these observations, and one that may help avoid similar unfortunate situations in the future. Under the Practice Notice, any document prepared with the assistance of generative AI for the purposes of litigation and submitted to the Court by or on behalf of a party or intervener must include a declaration disclosing that it contains AI-generated content. The Notice does not apply to Certified Tribunal Records submitted by tribunals or other decision-makers.

This declaration must be made in the first paragraph of the document at issue. The Court also provides an example of such a declaration:

Declaration

Artificial intelligence (AI) was used to generate content in this document.

Déclaration

L'intelligence artificielle (IA) a été utilisée pour générer au moins une partie du contenu de ce document.

In addition, the Practice Notice sets out guiding principles for the use of AI and discusses both the benefits and the risks of using AI in the legal profession. In particular, the Court notes that ethical and access-to-justice issues can arise when a lawyer uses AI for a client who is unfamiliar with the technology. The Court encourages lawyers to provide "traditional, human services" to clients who are unfamiliar with AI or who do not want it to be used.

The Court also cautions the profession about relying on legal references and analysis generated by AI, and emphasizes the importance of using "only well-recognized and reliable sources". This guidance addresses circumstances such as those at issue in Mata v Avianca and Zhang v Chen, in which AI-generated documents contained fabricated case law and fake citations. Further, the Practice Notice references the "human in the loop" principle, which requires that AI-generated documents and materials be checked by a person, and notes that such a review is in keeping with the standards generally required of legal professionals.

Use of AI by the Courts

The Court's Interim Principles and Guidelines primarily address the use of AI for administrative and procedural purposes. For instance, the Court specifically states that it "will not use AI, and more specifically automated decision-making tools, in making its judgments and orders, without first engaging in public consultations", and notes that this commitment extends to the use of AI in determining issues between parties, as reflected in its Reasons for Judgment and Reasons for Order.

Overall, the Court attempts to balance its potential use of AI against the adverse impact that this technology may have on judicial independence and public confidence in the administration of justice. It also sets out seven principles that will guide its use of AI:1

  • Accountability: The Court will be fully transparent to the public for any potential use of AI in its decision-making functions;
  • Respect of fundamental rights: The Court will ensure that its uses of AI do not undermine judicial independence, access to justice, or fundamental rights;
  • Non-discrimination: The Court will ensure that its use of AI does not reproduce or aggravate discrimination;
  • Accuracy: The Court will use certified or verified sources and data in processing judicial decisions and in processing data for administrative purposes;
  • Transparency: The Court will authorize external audits of any of its AI-assisted data processing methods;
  • Cybersecurity: Data will be securely stored and managed to protect the confidentiality, privacy, provenance, and purpose of the data; and
  • "Human in the loop": Members of the Court and their law clerks will verify the AI-generated outputs used in their work.

Conclusion

While AI presents opportunities for significant efficiencies in legal drafting, the Practice Notice and the Interim Principles and Guidelines make clear that humans still bear ultimate responsibility for the contents of their court documents.

Both documents reflect this responsibility through two predominant themes: transparency, meaning that all parties are given notice when AI has been used in the preparation of a document or as part of an administrative process; and human review, meaning that the outputs of AI must always undergo human verification.

While these policies will likely evolve as AI technology advances, the Court has, at least for now, offered the profession a means of navigating the adoption of this technology.

Footnote

1. These seven points have been paraphrased and summarized, in part, through the use of an LLM; the output was then subject to human review.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.