Connected and smart devices – everyday devices fitted with microchips, sensors and wireless communication capabilities – are connecting people and objects to one another in ever-increasing ways. What has come to be described as the Internet of Things (IoT) has initiated a new digital revolution, where connected and smart products promise to make our lives easier and more efficient – or, at least, smarter.

With the addition of machine learning and artificial intelligence (AI), the products we use on a daily basis can make decisions on their own in increasingly complex situations. This, of course, is the benefit of AI.

In cases where a product, or its AI technology, makes decisions and learns from those decisions in order to make further ones, how do we analyze and assign legal liability when things go wrong?

Consider the scenario where a pedestrian suddenly steps out and is struck by a vehicle operated by a human. If a court is satisfied that a reasonable driver in the circumstances would not have been able to avoid the accident, can the same be said for an autonomous vehicle? Will an autonomous vehicle be held to a different standard? Or what if an autonomous vehicle in the same scenario avoids the pedestrian by making the decision to strike another vehicle, which turns out to be carrying several young passengers? To what extent will the autonomous vehicle and its system(s) be responsible? What if the mapping software contained the wrong coordinates and the cameras incorrectly scanned the environment? When was the system in the vehicle last upgraded? Is there a standard for over-the-air updates? What happens if the AI scheduler was supposed to book an appointment with the dealer to replace the worn-out brakes but had not yet done so?

These questions are no longer hypothetical, and the complexity of AI systems will challenge existing legal frameworks. While there is no AI-specific legislation in Canada, traditional product liability imposes liability in three areas:

  • In contract;
  • Under sale of goods and consumer protection legislation; and
  • In tort.

Designers, manufacturers and retailers of AI-driven products must navigate the current product liability system, which will need to catch up with the future of AI and machine learning.

Contract

A manufacturer of AI technologies can be held liable for damages arising from the breach of a condition or warranty contained in the contract. A condition in a sales contract may be defined as a fundamental obligation imposed on either of the parties, the performance of which is vital to the contract. Certain conditions may be statutorily implied into a contract of sale.

On the other hand, a warranty is a promise or statement of fact about goods that is collateral to the main purpose of the contract of sale, and may be express or implied. The scope and meaning of an express warranty will be determined by the actual words used by the seller in making their promise, while the scope and meaning of an implied warranty is determined by the circumstances of the case, including the seller's conduct.

Designers and manufacturers of AI technologies must ensure any contractual conditions and warranties are met in order to avoid claims for breach of contract. Warranties must be distinguished from conditions in order to determine the potential remedies for a breach. A condition is key to the primary purpose of the agreement and, if breached, will permit a purchaser to cancel or rescind the contract in certain circumstances. A breach of warranty, on the other hand, gives rise to a claim for damages; it does not, however, give the injured party the right to reject the goods and treat the contract as repudiated. In Québec, a breach of the legal or contractual warranty against latent defects can lead to the cancellation of the sale or a reduction in purchase price.

Legislation: Sale of Goods acts and consumer protection laws

Sale of Goods legislation in the common law provinces implies specific conditions into most contracts of sale. Once a contractual relationship between the buyer and seller is established, the specific conditions that a seller of AI technologies must meet are:

  • that the goods are fit for a specific purpose; and
  • that the goods are of merchantable quality.

If an AI-infused product is sold that does not meet these conditions, the seller will be held liable without the plaintiff having to prove fault or negligence. In other words, the absence of negligence is not a defence.

In the common law provinces, a purchaser who buys an AI-infused product from a retailer can sue the retailer, but not the manufacturer, as there is no contract between the purchaser and the manufacturer. This does not mean, however, that a retailer is without recourse against the manufacturer. The retailer can rely on the legislation to sue the party it purchased the product from (as well as every other party in the chain of distribution), ultimately leading back to the manufacturer. One exception to the purchaser's inability to sue the manufacturer arises where the manufacturer's promotional materials induced the purchase: some courts have held that an express contract between the parties was not necessary and that the purchaser could rely on Sale of Goods legislation to sue the manufacturer directly.

Sale of Goods legislation in the common law provinces typically provides that there are no implied warranties or conditions in a contract of sale as to quality or fitness of goods for any particular purpose, except:

  • Where the buyer either expressly or implicitly lets the seller know the particular purpose for which the goods are required, showing that the buyer relies on the seller's skill or judgment; and
  • Where the goods are those normally supplied in the seller's course of business.

If these criteria are met, legislation implies into the contract of sale a condition that the goods and AI technologies be fit for the particular purpose for which they are required.

The Canada Consumer Product Safety Act (CCPSA) can also be a source of exposure for manufacturers of products incorporating AI technologies. The CCPSA, which applies to all "consumer products," is meant to address and prevent "dangers to human health or safety that are posed by consumer products in Canada."

Under the CCPSA, it is an offence to label or package a consumer product in a manner that creates "an erroneous impression regarding the fact that the product is not a danger to human health or safety," or that is misleading as to safety certification or compliance with applicable standards. It is also an offence to advertise or sell such a product. These offences can be expected to apply equally to products incorporating AI technologies.

Designers and manufacturers of AI technologies also need to be aware of specific regulatory requirements that may apply from product to product. For example, the provinces and federal government have jurisdiction over motor vehicles (including autonomous vehicles), while Health Canada has jurisdiction over medical devices. With the proliferation of AI technologies, the layering of legal obligations through specifically targeted regulatory schemes is expected to thicken.

Negligence

Most claims in product liability are based on the tort of negligence, which is likely to remain the focus with AI technologies.

To be successful in a negligence action, a plaintiff must establish (on a balance of probabilities) that a manufacturer was negligent in the design or manufacturing of the product at issue, or that it failed to warn of a danger associated with the product. The plaintiff must also prove there was sufficient proximity between the plaintiff and the defendant to give rise to a legal obligation (duty of care). If a duty of care is established, the court will hold the defendant manufacturer to a standard of care and skill expected from a manufacturer of the product in question.

With products incorporating AI technologies, allegations of negligent design will be front and centre. Design defects generally arise when the product is manufactured as intended, but the design causes malfunction or creates an unreasonable risk of harm that could have been reduced or avoided through the adoption of a reasonable alternative design.

In determining whether the design defect creates an unreasonable risk of harm, courts generally apply a risk-utility test: "Was there a reasonable alternative design that was safer?" This analysis necessarily involves determining the state of knowledge and technology in the industry responsible for the design of the allegedly defective product at the time it was designed, which will be difficult given how novel and rapidly evolving AI technologies are.

Nevertheless, in assessing whether there was a reasonable alternative design at the time, the court will consider many factors:

  • The utility of the product and AI technology in question to the public as a whole and to the individual user, contrasted with that of the product and AI technology with the alternative design;
  • The likelihood the product and AI technology will cause harm in its intended use;
  • The severity or magnitude of the harm that may be caused by the product and AI technology. The court will be more critical of the design of a product and AI technology with the potential to cause severe injuries;
  • The availability and consequences of adopting the alternative design;
  • The probability and severity of harm that may be present in an alternative design. The overall safety of the product and AI technology must be assessed;
  • The effects of the alternative design on the function and cost of the product and AI technology;
  • The manufacturer's ability to spread any costs related to improving the safety of the design; and
  • Whether the product and AI technology was adequately tested for risks of harm before being sold to the public. (A manufacturer must take steps to identify foreseeable risks involved in the use of its product and cannot use its own lack of testing to argue that the harm was not foreseeable.)

A court will also consider the plaintiff's ability to have avoided injury through careful use of the product. A manufacturer may point to the plaintiff's misuse of its product to establish that the design was not defective, or rely on that evidence to establish contributory negligence on the part of the plaintiff. That being said, with AI-driven products there is likely to be a diminished focus on the end user's uses (and misuses). Greater attention will fall on the algorithms and the design of the underlying software that drives the product, with focus on who is "in control" of the risk connected with operating the AI technology and who benefits from its operation.

Warnings will also likely have a greater influence with AI technologies. If a manufacturer knows, or ought to know, of a danger associated with the use of its product, the manufacturer has a duty to warn all consumers of the potential danger, both pre- and post-sale. By the same token, users of products have a duty to read and heed the warnings and instructions supplied with a product, or bear the consequences of any resulting injuries.

The requirement that warnings be reasonably communicated may be difficult to satisfy where users of AI technologies have a hard time understanding how such products operate. With traditional products, manufacturers are urged to use pictorial warnings in addition to appropriate written ones, and to ensure any warnings supplied with a product are visible, permanent, clear and unambiguous. This approach may not make sense with products incorporating AI technologies.

The manufacturer or distributor must also warn of any foreseeable misuse of the product. Where a danger is obvious, such as the sharp blade of a knife, a manufacturer has no duty to warn of the risk of injury. Likewise, if a product is designed for use only by a skilled person rather than the general public, there is no need to warn against a danger that should be obvious to such a skilled person. With AI technologies, this raises the question: is it possible to communicate to users the nature of such systems and how things may go wrong? Again, AI technologies may prove challenging to fit within these parameters.

Manufacturers and distributors not only have an ongoing duty to inform users of all known defects or dangers associated with a product, but they must also warn them where there is reason to suspect that there is a danger associated with the use of the product.

Where a manufacturer or distributor becomes aware of a danger in using the product, the courts have imposed a high standard on them to devise a program alerting owners to the potential danger. Generally, post-sale warnings to customers about defects must contain clear language bringing the danger to the customer's attention and must clearly advise the customer to stop using the product.

Accordingly, failure to initiate a public warning campaign early could result in the manufacturer or the distributor being liable for any injuries caused as a result of the suspected defect.

Privacy, personal information and the use of AI

In addition to the current product liability framework, those involved in designing and manufacturing products incorporating AI technologies must keep in mind proposed updates to Canada's privacy legislation.

The Canadian government recently introduced Bill C-11, setting out the Digital Charter Implementation Act, 2020, which aims to strengthen privacy protections for Canadians in the digital landscape.

With respect to AI specifically (styled as "automated decision systems" in the Bill), the legislation creates new transparency requirements over the use of personal information, requiring organizations that use AI to provide, in plain language, a general account of the use of such a system to make predictions, recommendations or decisions about individuals that could have significant impacts on them.

Providing a general account of certain AI technologies (such as deep learning systems and other neural-network-style architectures) may be challenging, but the legislative drafting at least allows manufacturers and others to use simplified explanations where further detail would be confusing. The proposed legislation, however, also provides that, in relation to specific decisions, individuals will be able to request that organizations explain (also in plain language) how a prediction, recommendation or decision was made by an automated decision system and how the personal information used was obtained.

While this latter provision is intended to ensure that individuals affected by an AI decision understand how their personal information factored into it, the provision is broadly drafted, and Canadian courts give the phrase "personal information" a large and liberal reading. As a result, there will be a wide variety of circumstances in which manufacturers and other businesses will be obliged to provide a plain-language explanation of how a decision was reached, even where the processing of personal information is not central to the function of the system. It is not yet clear how this explanatory challenge can be met.

The proposed legislation also contains private right of action provisions for businesses found to be in breach. Accordingly, in addition to product liability considerations, manufacturers and businesses will have to keep in mind that individuals affected by an organization's conduct could seek damages for loss or injury. If the proposed legislation is enacted, revisions to privacy and data protection practices will likely be required to ensure compliance with Canada's privacy laws. Manufacturers and others should also diligently eliminate the use of personal information by AI technologies wherever possible; in addition to satisfying the general privacy law requirement to use only as much personal information as is necessary, doing so will minimize the ways in which those technologies may be caught by the requirements of Canada's privacy laws.

Hey AI – where do we go from here?

Laws and regulations often have a difficult time keeping pace with the speed of technology. The absence of any specific legal or regulatory provisions addressing AI technologies will leave many questions unanswered. But that will not slow the technology, and the current product liability legal framework will soon be put to the test.

Manufacturers and designers should consider the following:

  • Obtaining contractual clarity amongst one's partners (e.g., across various licence agreements, subscription agreements, terms of service agreements and cloud computing service contracts) will help identify legal obligations and responsibilities;
  • Representations and warranties made within a governing contract should be carefully reviewed. Similarly, advertising and claims about what the smart and connected device and its AI technologies promise should be vetted to avoid any misrepresentations;
  • Defence and indemnity agreements should be reviewed to understand their scope. Where a risk is not covered by an indemnification provision, contracting parties should consider insuring against that specific potential loss through their insurance arrangements;
  • Warnings and instructions may become more demanding with the use of AI technologies and must be rethought to ensure they are clear and explicit;
  • Product and software updating will be critical to ensure that AI technologies continue to perform throughout their life cycle. Ongoing operation and monitoring, including testing and field performance review, will also be crucial, as will over-the-air updating; and
  • Proactive review of privacy and data protection practices to ensure compliance with Canada's next generation of privacy laws. Manufacturers and others should also diligently eliminate the use of personal information by AI technologies wherever possible, thereby minimizing the ways in which those technologies may be caught up by privacy law requirements.

Takeaways

Regardless of how well a product is designed, manufactured or distributed, the threat of litigation is always present. Designers and manufacturers of AI technologies will find themselves faced with allegations that a product was defectively designed or manufactured, that warnings with regard to usage or explanations of product behaviour were inadequate, or sometimes a combination of these. This is why it is imperative that designers and manufacturers understand how to proactively eliminate risks, where possible, and how best to defend their products when faced with lawsuits, whether they are individual claims or class actions.

Originally published by Borden Ladner Gervais LLP, March 2021.
