As artificial intelligence (AI) becomes less of a curiosity and more of an everyday tool, disputes over its operation are increasingly common and, when things go wrong, the question inevitably follows: whose fault is this, and who is liable? One high-profile example is the ongoing dispute between Hong Kong businessman Samathur Li Kin-kan and London-based Tyndaris Investments, in which Tyndaris is suing its client for $3 million in allegedly unpaid fees. In a countersuit, Mr. Li is claiming $23 million in damages allegedly resulting from Tyndaris' use of algorithmic trading in managing his portfolio.

Tyndaris Case

The dispute centers on whether Tyndaris misled its client as to the AI's capabilities, which means that the AI's performance itself will be adjudicated. Media reports say that Tyndaris' AI would comb through online sources (e.g., real-time news and social media posts) to gauge investor sentiment and make predictions on U.S. stock futures. It would then send instructions to a broker to execute trades, adjusting its strategy over time based on what it had learned. According to those reports, back testing – simulating returns based on historical data – suggested that the AI was highly effective at generating significant returns on an original investment. However, when actually tasked with managing a $2.5 billion portfolio, it allegedly lost money regularly, including an alleged $20 million loss in a single day. The legal issues involved are novel, as the case is among the first (if not the first) in which the issue is whether an AI's actions give rise to civil liability.
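To make the back-testing concept concrete, below is a minimal, purely illustrative Python sketch of how returns from a sentiment-driven strategy might be simulated against historical data. The data, the threshold parameter and the trading rule are all invented for illustration; they are not drawn from Tyndaris' actual system, whose design has not been publicly disclosed.

# Hypothetical back test of a sentiment-driven strategy.
# All figures are invented for illustration only and do not
# reflect Tyndaris' actual (undisclosed) system.

# Historical daily data: (sentiment score in [-1, 1], futures return in %)
historical_data = [
    (0.62, 1.1), (0.15, -0.3), (-0.48, -0.9),
    (0.33, 0.4), (-0.71, -1.5), (0.05, 0.2),
]

def backtest(data, threshold=0.3, capital=1_000_000.0):
    """Simulate returns: go long when sentiment exceeds the threshold,
    go short when it falls below -threshold, stay flat otherwise."""
    for sentiment, daily_return_pct in data:
        if sentiment > threshold:
            position = 1    # long: profit when the market rises
        elif sentiment < -threshold:
            position = -1   # short: profit when the market falls
        else:
            position = 0    # flat: no exposure
        capital *= 1 + position * daily_return_pct / 100
    return capital

print(f"Final capital: ${backtest(historical_data):,.2f}")

Even this toy model illustrates the gap at issue in the lawsuit: a rule that appears profitable on a chosen historical sample may behave very differently on live markets, which is precisely the kind of unforeseeability discussed below.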

Bases for Liability of AI in Quebec and Common Law Canada

One issue that will presumably come before the court is whether Tyndaris' actions failed to meet the standard required for financial advisors under applicable securities laws. But a broader legal question – and the one analysed below in a Canadian context – is on what basis a person can be held liable for the acts or omissions of AI in the first place.

Setting aside contractual recourse, in the absence of a liability regime specific to AI, a party claiming damages caused by AI might invoke the rules governing what Quebec law refers to as "an act of a thing." The applicable provision, Article 1465 of the Civil Code of Quebec (CCQ), simply states that "the custodian of a thing is bound to make reparation for injury resulting from the autonomous act of the thing, unless he proves that he is not at fault."

Under such an analysis, the court would need to determine who the AI's "custodian" is and whether the AI's actions constitute an "autonomous act of the thing." Although Quebec courts have yet to adjudicate the question, since AI operates without direct, ongoing human intervention, its actions could reasonably be considered "autonomous acts of the thing."

The concept of "custodian" is, however, less clear. The custodian – who need not own the thing – is the person who exercises a power of surveillance, control and direction over it. In this case, the use of AI reportedly resulted from a partnership between Tyndaris and a company called 42.cx, which had developed the AI itself. Tyndaris then used the technology to create the AI-managed investment fund at issue in the litigation.

In Quebec, it may be possible for investors such as Tyndaris' clients – whose portfolios were being managed based on the AI's analysis – to argue that, from their perspective, Tyndaris had surveillance, control and direction over the AI. After all, they bought the service from Tyndaris and presumably had no interaction with any other party in the supply chain.

In reality, however, "control" and "direction" are strong words to apply in an AI context. Tyndaris presumably monitored how the AI managed portfolios, arguably exercising a power of surveillance over it. But, as a third-party reseller, could Tyndaris really be said to exercise "control" or "direction" over an algorithm that it did not design, whose inner workings it likely does not fully understand, and whose actions it cannot predict?

Crucially, Article 1465 merely creates a presumption that the custodian is liable for acts of the thing, a presumption that the custodian can rebut by showing an absence of fault. As a general rule, that burden can be met by showing that the act of the thing was unforeseeable and that nothing could reasonably have been done to avoid it. As a result, AI's sometimes mysterious behaviour might actually provide grounds for relief, since even its programmers, who presumably understand the AI best of all, are often unable to foresee exactly how it will act. It is therefore possible to argue that even if the AI acted in an undesirable way, there was nothing the custodian could reasonably have done to avoid it. Indeed, outside special circumstances such as AI being designed, trained or sold with wanton disregard for any harmful effects or total indifference to algorithmic transparency – to the point that it is unduly difficult to explain the AI's behaviour – any custodian may have good arguments as to why it is without fault and therefore should not be held liable for the AI's actions.

It is important to note that Article 1465 is not the only means of bringing a suit over damages caused by AI: other options under the CCQ include the warranty of quality under Article 1726 (available to buyers against sellers, manufacturers and certain other parties in the supply chain) or the general liability regime for fault under Article 1457 (likely more difficult, given that the plaintiff must prove the existence of fault). A consumer who has purchased an AI product for personal use might also invoke the Consumer Protection Act, for example section 37 (requiring that goods be fit for the purposes for which goods of that kind are ordinarily used) and sections 40 and 41 (requiring that goods and services conform to the description of them in the contract and in advertising statements).

Outside of Quebec, the situation appears even muddier. As the common law has no regime specific to acts of a thing, the avenues available to a plaintiff are likely to be similar to those described in the preceding paragraph: a claim under the tort of negligence (the equivalent of Article 1457 of the CCQ), or under statute (primarily laws governing product liability).

For a claim of negligence to succeed, the plaintiff must establish that the defendant owed the plaintiff a "duty of care." While there might be a duty of care in the case of a loss due to a decision made by AI, it is not obvious whose duty it would be. It could not be owed by the AI itself, since a duty must be owed by someone with legal personality and, under the law, AI is nothing more than code. As a result, the duty of care would have to be owed by someone else, such as the owner, manufacturer, user or service provider.

Lawmakers could, one day, elect to give AI legal personality (like a corporation). While that could allow AI to be sued, it would be useless to sue the AI unless it had assets against which to collect damages awards. More practically, it could also mean that persons responsible for the conduct of the AI could be held vicariously liable for the AI's actions. Vicarious liability is well recognized in Canadian common law and most commonly arises in the employment context, in which an employer is held responsible for the acts of its employees when they are acting in the course of their employment.

AI that has legal personality could even be held criminally liable for its actions, which could give rise to the same considerations as when a corporation is found criminally liable. The law creates a legal fiction of a "directing mind", typically a senior officer within the organization who has an important role in setting policy or who manages an important part of the organization's activities. Certain individuals, by virtue of the position they hold (e.g., CEO, CFO), will automatically be considered senior officers/directing minds. As a result, if the AI has legal personality and can be found criminally liable, the organization and its directing minds could become parties to the offence. Until then, however, even if AI were simply considered part of the overall organization, directors and officers could still be found individually liable if they knowingly played a role in the AI's bad decisions.

Takeaways for Business

The Tyndaris dispute is scheduled to go before the Commercial Court in London in 2020. It will be interesting to see how the judge handles the issue. While the ruling will have limited value for Canadian law purposes, the reasoning is likely to be informative since many of the fundamental legal principles are similar between the two countries.

In the meantime, it remains essential that buyers and sellers of AI products or services be mindful of how to address liability both between themselves (most importantly, in their contract) and with respect to third parties.

Note: All dollar amounts are in USD.
Disclaimer: The description of the facts of the litigation matter above is based entirely on media reports. We are unable to independently verify the facts as stated in these reports.

