Chatbots are computer applications programmed to mimic human behaviour using machine learning and natural language processing. Chatbots can act autonomously and do not require a human operator. Given this freedom, chatbots do not always act in a manner that is fair and neutral – they can go wild, with unintended consequences. For example, a chatbot "e-shopper" given a budget of $100 in bitcoin quickly figured out how to purchase illegal drugs on the Darknet. Another chatbot was programmed to mimic teenage behaviour using social media data; by the afternoon of her launch, she was firing off rogue tweets and had to be taken offline by the developer. Chatbots were pulled from a popular Chinese messaging application after making unpatriotic comments. A pair of chatbots taught to negotiate by mimicking trading and bartering developed their own strange form of communication once they started trading with each other.

Online dating sites can use chatbots to interact with users looking for love and to increase user engagement. Chatbots can also go rogue in chat rooms, extracting personal data and bank account information and stoking negative sentiment. And chatbots are increasingly being used by businesses as customer service agents. Even these legitimate and well-meaning corporate chatbots may go wild.

Chatbots can be involved in ethical decision-making when they autonomously interact with users. For chatbots to be accepted for commercial use, they will have to meet certain minimum ethical standards. Legal and ethical responsibility are inextricably linked, and commercial reality in effect imposes an imperative on businesses to address the corresponding legal issues and limit reputational risk. Care should be taken to ensure these company "representatives" act ethically.

Businesses can mitigate the risk of a corporate chatbot acting contrary to its designer's intent by addressing the following ethical considerations:

  • Chatbots should embed human values;
  • Chatbots should be transparent about the algorithms and data that drive their behaviour; and
  • Stakeholders should be accountable for any harm caused by chatbots.

Chatbots should embed human values

Chatbots should be designed from inception to embed human values in order to avoid breaching human rights and creating bias. The use of chatbots will not change the fact that those who breach legal obligations in relation to human rights remain responsible for those breaches. However, the use of AI may make determining who is responsible more complex.

Designers, developers, and manufacturers of AI will want to avoid creating unacceptable bias through the data sets or algorithms used. To mitigate this risk, they will need to understand the different potential sources of bias, and the AI system itself will need to integrate the identified values and enable third-party evaluation of those embedded values so that any bias can be detected.
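
To make this concrete, the following minimal sketch (in Python) shows what an external bias probe might look like: paired test prompts that differ only in a demographic term are sent to the chatbot, and refusal rates are compared across groups. The `ask_chatbot` client, the prompts, the refusal markers, and the threshold are all hypothetical illustrations, not a reference implementation.

```python
# Illustrative sketch only: an external bias probe for a chatbot.
# ask_chatbot(), the prompts, the refusal markers, and the 20% threshold
# are hypothetical; a real evaluation would use the production client
# and a validated test suite.

from collections import defaultdict

def ask_chatbot(prompt: str) -> str:
    """Hypothetical client for the chatbot under evaluation."""
    return "I'm sorry, I can't help with that."  # placeholder reply

# Template prompts that differ only in the group term, so any systematic
# difference in treatment is attributable to that term.
TEMPLATES = [
    "Can you recommend a credit card for a {} applicant?",
    "Summarise typical career options for a {} graduate.",
]
GROUPS = ["young", "elderly", "male", "female"]
REFUSAL_MARKERS = ("can't help", "cannot help", "unable to")

def refusal_rate_by_group() -> dict:
    counts = defaultdict(lambda: [0, 0])  # group -> [refusals, total]
    for template in TEMPLATES:
        for group in GROUPS:
            reply = ask_chatbot(template.format(group)).lower()
            counts[group][0] += any(m in reply for m in REFUSAL_MARKERS)
            counts[group][1] += 1
    return {g: refusals / total for g, (refusals, total) in counts.items()}

rates = refusal_rate_by_group()
gap = max(rates.values()) - min(rates.values())
print(rates)
if gap > 0.20:  # illustrative tolerance for the refusal-rate gap
    print(f"Potential bias: refusal rates differ by {gap:.0%} across groups")
```

A real audit would of course use a far larger, validated prompt suite and more robust response analysis than keyword matching, but the structure – identical prompts, varied group terms, compared outcomes – is what third-party evaluation of embedded values looks like in practice.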

Addressing such risks by attempting to embed human values in chatbots may be difficult for a range of reasons. For example, what constitutes a societal norm may differ over time, between markets, and between geographies.

How can transparency be achieved?

Chatbots should incorporate a degree of ethical transparency in order to engender trust; otherwise, market uptake may be impeded. This will be particularly important when autonomous decision-making by chatbots has a direct impact on the lives of market participants. The AI systems that control chatbots should be open, understandable, and consistent so as to minimise bias in decision-making, and chatbots should be transparent about the decisions and actions they take.
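
As an illustration of what such transparency might look like at the program level, the sketch below records every chatbot decision as a structured, machine-readable entry that can later be shown to a user or auditor. The field names, the `faq-bot-1.3` version tag, and the JSON-lines log format are assumptions made for the example.

```python
# Illustrative sketch only: logging each chatbot decision with enough
# context to explain it later. Field names and the JSON-lines format
# are assumptions, not an established standard.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    timestamp: str      # when the decision was made (UTC, ISO 8601)
    model_version: str  # which model or ruleset produced it
    user_input: str     # what the user asked
    decision: str       # what the chatbot did or replied
    rationale: str      # human-readable reason that can be surfaced

def log_decision(record: DecisionRecord, path: str = "decisions.jsonl") -> None:
    """Append the record as one JSON line, building an audit trail."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="faq-bot-1.3",  # hypothetical version tag
    user_input="Can I get a refund on this order?",
    decision="escalated_to_human_agent",
    rationale="Refund requests over $50 require human approval.",
))
```

Because each entry names the model version and carries a rationale, a log of this kind can answer the "why did the chatbot do that?" question that both transparency and accountability demand.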

Accountability for the harm caused by chatbots

Legal systems will need to consider how to allocate legal responsibility for loss or damage caused by chatbots. As chatbots proliferate and are allowed to control more sensitive functions, their unintended actions can become increasingly harmful. There should accordingly be program-level accountability to explain why a chatbot reached a decision to act a certain way.

Legislative initiatives are being considered in a number of jurisdictions to address questions of accountability. These include a registration process for AI, identity tagging, criteria for allocating responsibility, and an insurance framework.

The complexity of AI systems, in combination with the emerging phenomena they encounter, means that constant monitoring of chatbots, and keeping humans "in the loop", may be required. Although keeping humans "in the loop" may help to address accountability, it may also limit the intended benefits of autonomous decision-making. A balance will need to be struck.
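
The sketch below illustrates one common way of striking that balance: a confidence threshold that lets the chatbot act autonomously on decisions it is sure about while deferring low-confidence decisions to a human reviewer. The `classify_intent` model, the threshold value, and the review queue are all invented for illustration.

```python
# Illustrative sketch only: a confidence-gated "human in the loop".
# classify_intent(), the threshold, and the review queue are invented;
# raising the threshold buys more human oversight at the cost of less
# autonomous benefit, which is the balance described above.

CONFIDENCE_THRESHOLD = 0.85

def classify_intent(message: str) -> tuple:
    """Hypothetical intent model returning (intent, confidence)."""
    return ("cancel_subscription", 0.62)

def send_to_review_queue(message: str, intent: str, confidence: float) -> None:
    """Stand-in for routing the case to a human agent's work queue."""
    print(f"[review queue] {intent} ({confidence:.0%}): {message}")

def handle(message: str) -> str:
    intent, confidence = classify_intent(message)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-handled: {intent}"  # chatbot acts on its own
    # Below the threshold, the chatbot defers rather than risk harm.
    send_to_review_queue(message, intent, confidence)
    return "A human agent will follow up shortly."

print(handle("Please cancel everything on my account right now."))
```

In practice the threshold would be set per use case, with more sensitive functions warranting a higher bar and therefore more human review.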

Please also see our two previous posts for more information.


About Norton Rose Fulbright Canada LLP

Norton Rose Fulbright is a global law firm. We provide the world's pre-eminent corporations and financial institutions with a full business law service. We have more than 3800 lawyers and other legal staff based in more than 50 cities across Europe, the United States, Canada, Latin America, Asia, Australia, Africa, the Middle East and Central Asia.

Recognized for our industry focus, we are strong across all the key industry sectors: financial institutions; energy; infrastructure, mining and commodities; transport; technology and innovation; and life sciences and healthcare.

Wherever we are, we operate in accordance with our global business principles of quality, unity and integrity. We aim to provide the highest possible standard of legal service in each of our offices and to maintain that level of quality at every point of contact.

For more information about Norton Rose Fulbright, see nortonrosefulbright.com/legal-notices.


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.