For the most part, waiting to speak to a human customer service representative has become a thing of the past. With the advent of artificial intelligence ("AI"), companies have been able to use computer systems in lieu of, or in conjunction with, humans to serve their customers. Needless to say, AI has become the bedrock of the modern-day consumer experience. But what happens when AI gets it wrong? Moffatt v Air Canada, 2024 BCCRT 149 [Moffatt] poses an answer to this question.

Following the passing of a relative, Jake Moffatt ("Mr. Moffatt") went to Air Canada's website and used its chatbot to inquire about bereavement fares. The chatbot, which the tribunal described as an "automated system that provides information to a person using a website in response to that person's prompts and inputs", suggested that passengers could apply for bereavement fares after the fact, so long as the request was made within 90 days of the ticket issuance date. Relying on the information provided by Air Canada's chatbot, Mr. Moffatt booked his flights. After returning home, Mr. Moffatt contacted an Air Canada representative (the "Representative") to retroactively apply for the bereavement fare, as the company's chatbot had instructed. The Representative rejected Mr. Moffatt's application and informed him that (1) the chatbot was incorrect; and (2) Air Canada did not allow retroactive bereavement fare applications.

Mr. Moffatt, in turn, brought a claim against Air Canada for negligent misrepresentation. Air Canada denied that it was negligent, arguing that its chatbot was "a separate legal entity responsible for its own actions," and that Mr. Moffatt could have found the correct information about the company's bereavement fare policy had he visited another section of its website, which was hyperlinked in the chatbot's response to his inquiry.

The tribunal rejected Air Canada's argument and held that it was reasonable for Mr. Moffatt to rely on the information provided by Air Canada's chatbot. The tribunal noted that Air Canada had failed to take reasonable care to ensure its chatbot was accurate, and that the chatbot was simply a part of the company's website. Accordingly, Air Canada was ordered to pay Mr. Moffatt damages.

Key takeaways

Moffatt suggests that (1) companies can be held liable for the representations made by their chatbots; and (2) a chatbot can be viewed as an extension of the company itself rather than as a separate entity. As such, when implementing chatbots and other automated systems, companies should ensure that those systems are kept up to date with the company's current policies, practices, and procedures. Failing to do so may expose companies to liability. Notably, the decision suggests that pairing an automated system's incorrect response to a customer's inquiry with a link to the company's current policies may not be enough to avoid liability.

As the world embraces AI, companies and consumers alike should be aware that automated systems can "hallucinate" and provide inaccurate information. So the answer is that, depending on the circumstances, AI might get it wrong, and the consequences could be your responsibility.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.