With generative AI's ability to aid knowledge management, increase efficiency and accelerate development, there must be a balanced consideration of intellectual property (IP) protection and stakeholder interests.

Generative Artificial Intelligence (Gen AI) refers to algorithms that can generate text and images that are difficult to distinguish from those created by humans. The technology is fed data (trained) in order to recognize relationships and patterns in that data; generally, the more data a system is trained on, the more capable it becomes. Once trained, the system applies that learning to information submitted by end users to produce new content and products such as videos, photos and book summaries. Generative AI is growing in popularity because it quickly simplifies and completes tasks for the everyday user once given simple instructions.

Some of the more popular platforms are OpenAI's ChatGPT, DALL-E and DALL-E 2.

ChatGPT (Chat Generative Pre-trained Transformer) is a chatbot built on a large language model. DALL-E and DALL-E 2 are text-to-image models developed by OpenAI; they use deep learning methods to generate digital images.

Given the relatively easy access to these platforms, the use of generative AI continues to grow. According to a recent survey by MIT Technology Review, roughly four out of five business leaders (79%) anticipate their employees will make use of generative AI in completing tasks. Of the business leaders who said they were not currently prepared to adopt generative AI technology, a little over a third (35%) pointed to a lack of appropriate security, privacy and trust guardrails.1

In light of generative AI's increasing popularity, Canadian companies focused on furthering innovation should weigh issues around content creation and attribution, along with legal and ethical concerns, when exploring the potential of generative AI platforms to accelerate development.

As more AI-generated content and products are brought to market, companies will likely wrestle with how best to address today's disruptive environment with employees, investors, business partners and customers. In addressing these risks, communication with stakeholders should be accompanied by consultation with legal counsel, where appropriate. This bulletin focuses on the prospective legal implications of increased generative AI adoption across industries. Highlighted here are four key questions to reflect on when using generative AI platforms while navigating intellectual property and commercial considerations.

  1. How does your Business plan to use generative AI platforms?
    A business' day-to-day use will likely determine what governs platform utilization within the company and which policies apply, whether that be a privacy policy, an acceptable use policy or a confidentiality policy. For instance, suppose you create work product using a generative AI platform whose provider is not subject to any confidentiality obligations. If sensitive personal data is disclosed through the platform, your business may have no recourse against the provider, and the business itself may be exposed to liability for disclosing personal data to a third party without sufficient safeguards.
  2. Will your Business be taking any steps to monitor the quality of AI generated content?
    Reliability is critical in most industries, and finance and healthcare are particularly sensitive areas, so what may be strategically expedient may also be legally complicated. Given that there are now documented instances of generative AI producing "hallucinations,"2 which are essentially incorrect responses asserted as fact, measured implementation of generative AI platforms is likely a sound approach. Upon implementation, processes for output evaluation and monitoring should be put in place to ensure content and product reliability, validity and quality. Businesses should also ensure that policies encouraging responsible AI use and periodic platform testing are timely and comprehensive, given the likelihood of future regulatory scrutiny.
  3. IP risks while using generative AI platforms
    If a company has concerns about protecting intellectual property or safeguarding trade secrets while retaining a competitive advantage, it is imperative that employees are made aware that submitting proprietary, or even seemingly benign, information into a generative AI platform may lead to disclosure to the public, including competitors. Though terms of use vary between platforms and the user packages they offer, the generative AI provider may have the contractual right to use, reproduce or publish content derived from end-user inputs (prompts) to create new works. Consequently, depending on the type of AI platform at issue and its function, IP owners alleging infringement are likely to face legal ambiguity and arbitrary outcomes. Guardrails a company may consider include 1) limiting or restricting the use of generative AI platforms, and 2) updating employment policies or issuing guidance obligating employees to comply with AI use policies.
  4. Is your Business preparing to follow regulations related to generative AI?
    Innovation, Science and Economic Development Canada (ISED) has released draft elements of a code of practice for generative AI systems. Though specific regulatory guidelines on artificial intelligence under the Artificial Intelligence and Data Act (AIDA) are still pending, the proposed generative AI standards below provide a framework that a pragmatic business leader should consider.

Safety

Safety must be considered holistically throughout the system's lifecycle, with a broad view of potential impacts, particularly regarding misuse.

Fairness and Equity

It will be essential to ensure that models are trained on appropriate and representative data, and provide relevant, accurate, and unbiased outputs.

Transparency

It is important to ensure that individuals realize when they are interacting with an AI system or with AI-generated content.

Human Oversight and Monitoring

Human oversight and monitoring of AI systems are critical to ensure that these systems are developed, deployed, and used safely.

Validity and Robustness

It is important to ensure that AI systems work as intended and are resilient across the range of contexts to which they are likely to be exposed.

Accountability

Developers, deployers, and operators of generative AI systems should ensure that multiple lines of defence are in place to secure the safety of their systems.

These are high-level summaries of the elements of the code; the second part of this series will provide a comprehensive analysis of the code and its interaction with the Artificial Intelligence and Data Act (AIDA).

Generative AI is here. To minimize disruption, business leaders should consider future regulatory and economic implications and plan to implement reasonable policies.

Footnotes

1. https://www.technologyreview.com/2023/07/25/1076532/generative-ai-is-empowering-the-digital-workforce/

2. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9939079/

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.