The Financial Post interviews Norton Rose Fulbright Canada LLP's patent and trademark lawyer Maya Medeiros on Artificial Intelligence's discriminatory biases.

Despite all of the advances in artificial intelligence (AI), experts caution that these technologies are not immune to some of the less-than-admirable tendencies that afflict humans.

As recently reported by the Financial Post, experts have noted growing bias in the decisions made by AI software. Specifically, AI outputs have been found to discriminate on the basis of race, ethnicity, gender and disability.

This phenomenon presents novel challenges in precisely the areas that have historically been susceptible to human lapses in judgment. One such area, as noted by Maya Medeiros, a patent and trademark lawyer at Norton Rose Fulbright Canada LLP, is employment and human resources management. Decisions regarding hiring, promotion and firing, Ms. Medeiros states, are often made or influenced by AI despite biases inherent in particular software, biases of which employers may not be aware. These discriminatory biases may manifest in various forms, from imparting negative scores in a cognitive emotional analysis of a video interview on account of an individual's disability, to discounting an individual's work ethic on account of a period of unemployment due to pregnancy.

To prevent decades of progress in human rights from being undermined by what is perceived to be objective analysis on the part of AI, it is imperative that those who build the technology, as well as those who use it, take accountability for its shortcomings. In that regard, Ms. Medeiros states that "[e]mployers need to ensure that AI embeds proper values, that its values are transparent and that there is accountability, in the sense of identifying those responsible for harm caused by the system." One way to do this, Ms. Medeiros suggests, is to take advantage of AI's ability to learn and evolve by providing it with training data that reflects diverse values early on, and to continue to monitor that data throughout the machine learning process.
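By way of illustration only, ongoing monitoring of an AI screening tool could include a simple comparison of selection rates across demographic groups. The short Python sketch below is not any vendor's or the firm's methodology; the data, field names and the four-fifths threshold are hypothetical assumptions chosen for the example.

```python
# Illustrative sketch only: compare selection rates across groups and flag
# possible adverse impact using a rough "four-fifths" threshold.
# All data, field names and thresholds here are hypothetical.

from collections import defaultdict

def selection_rates(records, group_key="group", selected_key="selected"):
    """Return the fraction of selected candidates per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for r in records:
        counts[r[group_key]][1] += 1
        if r[selected_key]:
            counts[r[group_key]][0] += 1
    return {g: sel / total for g, (sel, total) in counts.items() if total}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the highest rate."""
    if not rates:
        return {}
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes produced by an AI tool.
    candidates = [
        {"group": "A", "selected": True},
        {"group": "A", "selected": True},
        {"group": "A", "selected": False},
        {"group": "B", "selected": True},
        {"group": "B", "selected": False},
        {"group": "B", "selected": False},
        {"group": "B", "selected": False},
    ]
    rates = selection_rates(candidates)
    flags = adverse_impact_flags(rates)
    for group, rate in rates.items():
        note = "REVIEW" if flags[group] else "ok"
        print(f"group {group}: selection rate {rate:.2f} ({note})")
```

A check of this kind is only a starting point; disparities it surfaces would still need to be assessed against the applicable employment and human rights standards.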

Employers would indeed be well advised to take an active role in counteracting any potentially improper decisions made or influenced by AI to ensure compliance with employment and human rights laws, as it is employers themselves who are likely to be held ultimately responsible for any such violation. Additionally, it may be advisable for employers and others relying on AI to obtain contractual indemnification from the developers of the technology, to avoid liability for aspects of AI systems beyond their control.

Written in collaboration with Alexandre Kokach, articling student.


About Norton Rose Fulbright Canada LLP

Norton Rose Fulbright is a global law firm. We provide the world's preeminent corporations and financial institutions with a full business law service. We have 3,800 lawyers and other legal staff based in more than 50 cities across Europe, the United States, Canada, Latin America, Asia, Australia, Africa, the Middle East and Central Asia.

Recognized for our industry focus, we are strong across all the key industry sectors: financial institutions; energy; infrastructure, mining and commodities; transport; technology and innovation; and life sciences and healthcare.

Wherever we are, we operate in accordance with our global business principles of quality, unity and integrity. We aim to provide the highest possible standard of legal service in each of our offices and to maintain that level of quality at every point of contact.

For more information about Norton Rose Fulbright, see nortonrosefulbright.com/legal-notices.

Law around the world
nortonrosefulbright.com

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.