The rapid development of artificial intelligence (AI) is transforming the world. The International Data Corporation predicts that global spending on AI will reach US$37.5 billion in 2019 and US$97.9 billion by 2023.1

As businesses invest in projects that use AI software and platforms, it is important to consider the liability and litigation issues that arise.

Developing law: liability for artificial intelligence

As companies take advantage of the benefits offered by AI, new fields of liability emerge. The civil law provides ample opportunity for those damaged by AI-powered decisions to seek compensation, damages or other redress. For example, products with inbuilt AI will clearly fall under existing statutory safety and fitness for purpose laws, and the usual norms of product liability law will apply to determine liability, causation and damages. For products and services that use AI (and strict liability instances aside), contractual limitations on liability, terms of use, warnings and notices, exclusions and indemnities will be just as effective as if the product or service relied on human intelligence.

But more complex uses of AI will test the boundaries of current laws and are likely to give rise to new examples of liability, albeit under existing legal frameworks. The Robo-Debt class action currently being explored against the Australian Government will rely on existing administrative law to challenge the validity of the Government's reliance on an algorithmic system to automate the determination of welfare debts.2

Companies looking to assess legal risk from the implementation of AI need to take a holistic approach to liability, assessing their risk at multiple levels: liability for the intention and effect of an AI system, liability for the performance of the algorithm, and liability arising from the data used to train the algorithm.

Liability for intention and effect

Companies must be satisfied that their use of AI is socially justifiable and legally acceptable. They should be clear about the problem that the AI is seeking to address, and be vigilant to ensure that the algorithm is operating as intended. The absence of a human decision maker in a process does not mean that liability for the unlawful effects of an AI-powered decision is avoided. For example, unlawful discrimination against a person will be just as unlawful if the discriminatory decision was made by an AI tool rather than a person. EU and US regulators have also pursued cases of AI pricing algorithms causing companies to behave in cartel-like ways, notwithstanding that there was no human involvement in the price setting. In Australia, the ACCC's Digital Platforms Inquiry flagged personalised pricing algorithms as an area for concern and monitoring. These algorithms set prices based on data about a customer's perceived need and capacity to pay.

Companies should also take a macro view as to whether the intention of the AI they propose to use is consistent with good corporate behaviour. AI algorithms that make decisions affecting individuals' rights can have consequences for a company's reputation even if no legal obligation is contravened. Companies should take note of the growing body of human rights law and legislation being developed in relation to AI in order to ensure the ethical development of AI products.3

Liability for the actions of the algorithm

Proving the method by which an AI algorithm reached a decision is particularly complex and, in a litigation sense, may be beyond human expert explanation. More likely (and strict liability instances aside), a party seeking to defend a decision reached by algorithm will seek to prove that the outcome was within reasonably acceptable parameters.

This will require consideration as to the design of the algorithm itself, the data that the algorithm has trained on and the testing of outcomes. What is acceptable will evolve over time.

A well-known example of algorithmic error is the Uber self-driving car that did not recognise a woman walking a bicycle as an object requiring it to stop or take avoidance action, causing a fatality. In that case, a lack of testing, and of prior data on the LIDAR identification of a human walking a bicycle, caused the algorithm to reach several incorrect conclusions, and the brakes were applied too late to avoid the collision.4

The example highlights the intersection between the legal notion of foreseeability and the training of an AI system to account and test for all foreseeable outcomes. Companies must ensure that the AI's decision making stays within acceptable parameters as more data is applied to it and its decision-making processes evolve.

A careful auditing process will be critical in establishing the credibility and reliability of an AI system. The UK Information Commissioner's Office has identified five risk areas for analysis and consideration in AI decisions:5

  1. Meaningful human reviews in non-solely automated AI systems.
  2. Accuracy of AI systems outputs and performance measures.
  3. Known security risks exacerbated by AI.
  4. 'Explainability' of AI decisions to data subjects.
  5. Human biases and discrimination in AI systems.

Liability for the data used in AI algorithms

While it is obvious that incorrect or insufficient data will cause an AI algorithm to make erroneous decisions, particular caution is also needed in relation to the collection, use and disclosure of the data that trains or underpins the algorithm. For AI algorithms dealing with people, it is critical to ensure the protection of personal information and compliance with privacy laws.

Companies will be liable for the collection and use of personal information in an AI system, and must ensure that the information has not been collected or stored in contravention of privacy laws. Similarly, there is an ongoing obligation to maintain the security and integrity of personal information.

Additionally, algorithms must be tested to ensure that the intended use does not result in the inadvertent disclosure of personal information, such as through model inversion. Model inversion is an AI risk that arises when a user has some data about a person, but can then establish other information about the person by observing the outcome of the algorithm. The issue can arise even if the personal information in the data set has been de-identified, because some models can accurately predict the parameters of the de-identified information to re-identify the particular individual. The same situation would apply to corporate data, resulting in the inadvertent disclosure of sensitive or confidential information, even though that information may not be contained in specie in the data set.
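To illustrate the mechanism (rather than the law), the following is a minimal, hypothetical sketch in Python of how a model inversion attack can work. All data, attribute names and model details below are invented for illustration only; the point is simply that an actor who knows some of a person's attributes and can observe the model's output may infer a hidden attribute by testing each candidate value against that output.

  # Hypothetical illustration of a model-inversion style attack, using only NumPy
  # and synthetic data. Nothing here reflects any real system or data set.
  import numpy as np

  rng = np.random.default_rng(0)

  # "Deployed" model is trained on synthetic records with three features:
  # [age (scaled), income (scaled), sensitive attribute (0 or 1)]
  n = 1000
  X = np.column_stack([
      rng.uniform(0, 1, n),          # age (scaled)
      rng.uniform(0, 1, n),          # income (scaled)
      rng.integers(0, 2, n),         # sensitive attribute
  ])
  true_w = np.array([0.5, 1.0, 2.0])          # sensitive attribute strongly drives the score
  y = X @ true_w + rng.normal(0, 0.05, n)     # training target with a little noise

  w, *_ = np.linalg.lstsq(X, y, rcond=None)   # learned weights of the deployed model

  def model_score(record):
      """The deployed model: returns the score an attacker can observe."""
      return record @ w

  # The attacker knows the target's age and income and observes the model's score,
  # but does NOT know the sensitive attribute (here, its true value is 1).
  target = np.array([0.3, 0.7, 1.0])
  observed_score = model_score(target)

  # Model inversion: try each candidate value for the hidden attribute and keep
  # the one whose predicted score best matches the observed output.
  candidates = [0.0, 1.0]
  errors = [abs(model_score(np.array([0.3, 0.7, c])) - observed_score) for c in candidates]
  inferred = candidates[int(np.argmin(errors))]
  print(f"Inferred sensitive attribute: {inferred}")   # recovers 1.0

The same basic approach can be applied to more complex models; the practical defence is to test and limit what a model's outputs reveal about the underlying data, as discussed below.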

Protection of data from model inversion risk needs to be considered in the testing of AI, both to ensure that personal information or confidential commercial information is not disclosed as part of normal use, and as a security measure to ensure that a malicious actor could not obtain that information through intentional misuse of the system.

Implications

The issue of liability for AI is as far-reaching as the potential use cases. In many cases, liability for AI will be straightforward and will not test the boundaries of established liability frameworks. However, complex systems will require careful thought and legal analysis. Companies should also have regard to the significant amount of policy development underway across the world to establish guidelines on the acceptable parameters of AI use.

Footnotes

1 See Worldwide spending on Artificial Intelligence systems, 4 September 2019: https://www.idc.com/getdoc.jsp?containerId=prUS45481219

2 This class action has only recently been foreshadowed. See Government's 'robo-debt' recovery scheme facing class action, 17 September 2019.

3 For more information on developments in the EU, USA and Australia see: https://corrs.com.au/insights/how-are-ai-regulatory-developments-in-the-eu-and-us-influencing-ai-policy-making-in-australia

4 For a detailed explanation of the algorithmic issues that caused the fatality see: https://www.economist.com/the-economist-explains/2018/05/29/why-ubers-self-driving-car-killed-a-pedestrian

5 See: https://ai-auditingframework.blogspot.com/2019/07/developing-ico-ai-auditing-framework.html

