Earlier this year, a U.S. District Judge ordered lawyers to pay a fine of USD 5,000 for acting in bad faith by making false and misleading statements to the court. What had the lawyers done? As part of their brief, they had cited case law that did not exist. So how did they find the case law? It had been fabricated by an Artificial Intelligence (AI) tool. This is one of many concerns about AI tools that have surfaced over the last few months.

Experts believe that the early proponents of AI are having their "Oppenheimer moment". In the aftermath of the Second World War, following the bombings of Hiroshima and Nagasaki, J. Robert Oppenheimer - the genius behind the first nuclear weapons - was consumed by guilt at having built the most destructive weapon of the age. He went on to become a staunch proponent of disarmament, calling for an end to the arms race. Today, we are seeing a new kind of arms race. Companies, countries and technologists across the globe are rushing to prove their mettle in developing AI tools, some of which are being released without a thorough risk assessment. As a result, experts are sounding alarm bells. In March 2023, the Future of Life Institute issued an open letter, signed by more than 1,000 people, including Elon Musk. The letter calls for a pause on advanced AI development until shared safety protocols are developed, implemented and audited by independent experts.

In this issue of TechTonic, we discuss the key legal concerns arising out of AI development, list the core principles that should underpin AI development, and compare the various legal/regulatory approaches that jurisdictions around the world are taking in this regard.

CONCERNS WITH AI

AI broadly encompasses technology that incorporates a degree of human cognitive ability and autonomy in accomplishing its purpose. The technology has been adopted across most industries, where it is deployed to make determinations about health, medicine, employment, creditworthiness and even criminality.

Poor design or inappropriate application of AI tools can give rise to several ethical concerns. While the list is long, some of the key concerns include:

  1. Algorithmic Bias: AI algorithms can inherit biases from the data sets that they are trained on, thereby perpetuating human bias. For example, a facial recognition tool may be less accurate for certain racial groups, leading to algorithm-induced discrimination.
  2. Privacy: AI systems are trained on massive data sets, which raises concerns about how such data is obtained and processed. Several regulators around the world have started examining whether training data has been obtained legally and ethically. There are also cybersecurity concerns, such as hacking and data manipulation.
  3. Autonomy and Accountability: As AI tools learn from their own performance and become increasingly autonomous, questions about who can be held liable for the faults of an AI algorithm multiply. For example, a self-driving car may get into an accident, raising the question of who can be held accountable.
  4. Deepfakes: Image- and video-based AI tools can generate near-realistic outputs which, if used with mala fide intent, can be extremely misleading. Deepfakes can be a powerful weapon for spreading misinformation and disinformation, with severe political and economic consequences.

KEY PRINCIPLES

To mitigate the risks arising from AI and to ensure that AI is developed responsibly, the FET principles (short for Fairness, Ethics and Transparency) are of prime importance.

  1. Fairness: The principle of "Fairness" in AI development seeks to ensure that AI systems do not discriminate against individuals or groups based on their race, gender, or other protected characteristics. It involves addressing biases in data and algorithms to provide equitable treatment to all users.
  2. Ethics: The principle of "Ethics" in AI focusses on considering the moral implications and potential societal impact of AI.
  3. Transparency: The principle of "Transparency" involves making AI algorithms explainable and understandable, allowing users to comprehend how AI systems work and how they arrive at their outputs and decisions.

By incorporating FET principles into AI development, or for that matter, development of any technology, researchers and engineers can work towards building AI systems that are more trustworthy, ethical, and beneficial to society.

UNESCO has built upon these core principles and identified ten principles for AI development that lay out a human-rights-driven approach to AI. UNESCO urges that all AI development be underpinned by Proportionality, Safety and Security, Privacy, Collaboration, Responsibility and Accountability, Transparency and Explainability, Human Oversight and Determination, Sustainability, Awareness and Literacy, and Fairness and Non-Discrimination.

A similar approach has been adopted by the Organisation for Economic Co-operation and Development (OECD), which has also formulated human-centric principles for AI development. Additionally, the OECD AI Policy Observatory combines resources from across the OECD and its partners from all stakeholder groups, facilitating dialogue among stakeholders and offering multidisciplinary, evidence-based policy analysis on AI's areas of impact.

LEGAL APPROACHES

Different jurisdictions around the world have sought to adopt different variants of these principles into their regimes. While some are in the legislative process, others are being introduced as regulations supporting existing technology laws. Largely, they all call for adopting a risk-based approach to AI, ensuring transparency in AI development as well as AI outputs, and imposing heavy penalties to deter illegal and unethical development. Below, we discuss some key developments around the world.

Risk vs Innovation

In 2021, the European Commission tabled a proposal for an EU regulatory framework on AI, which manifested as the general approach on the Artificial Intelligence Act (AI Act) in December 2022, shortly after the public launch of ChatGPT. In June 2023, the European Parliament adopted its negotiating position on the AI Act, paving the way for negotiations with the Council of the EU on the final text. The AI Act proposes a risk-based approach to regulating AI. AI systems presenting 'unacceptable' risks would be prohibited, including facial-recognition technology in public places, AI-based social scoring and biometric categorisation. A wide range of 'high-risk' AI systems would be authorised, but subject to a set of requirements and obligations to gain access to the EU market. AI systems presenting only 'limited risk' would be subject to very few obligations.

Canada has taken a similar approach. In 2022, the Canadian government introduced Bill C-27, the Digital Charter Implementation Act, 2022. A part of this robust package is the proposed Artificial Intelligence and Data Act (AIDA). Anyone responsible for an AI system – namely, a person who designs, develops, or makes the AI system available for use, or manages its operation – must assess whether it is a "high-impact system". A companion paper explaining the proposed law delves deeper into the concept of "high-impact systems". It also recognises that multiple parties are often involved in designing, developing, deploying, and using AI systems; for example, it notes that contributors to general-purpose open-source AI software would not be regulated, although entities that deploy "fully-functioning" high-impact systems would be. AIDA also prohibits certain practices in the handling of data and AI systems that may cause serious harm to individuals or their interests.

Contrastingly, the UK seeks to adopt a relatively light-touch, context- and application-based approach. In March 2023, the UK Government published a whitepaper calling for a pro-innovation framework. Instead of establishing a standalone body for AI regulation, the whitepaper proposes to enhance the remit and capacity of existing regulators to develop a sector-specific, and therefore application-specific, approach. The UK also seeks to establish a collaborative model under which regulators – such as the Information Commissioner's Office, the Competition and Markets Authority, the Financial Conduct Authority, the Office of Communications and the Equality and Human Rights Commission, among others – will need to jointly adhere to governing principles to foster trust and clarity.

The approach in the US has also been light-touch and application-based, led separately by different regulators, as in the UK, but it largely adopts a risk-based framework, as in China and the EU. On the light-touch, application-based front, in October 2022 the White House Office of Science and Technology Policy published the Blueprint for an AI Bill of Rights, a principles-based, non-binding document on the design, use and deployment of automated systems. Further, the Federal Trade Commission, the US competition watchdog, issued blog posts asking businesses to avoid unfair and misleading practices related to AI, including "Keep your AI claims in check" and "Chatbots, deepfakes, and voice clones: AI deception for sale". At the local level, New York City introduced one of the first AI laws in the US, effective from January 2023, which aims to prevent AI bias in the employment process. On the risk-based front, in January 2023 the US Department of Commerce's National Institute of Standards and Technology (NIST) released its Artificial Intelligence Risk Management Framework as a voluntary guide to managing AI risks.

In the UAE, the approach has also been innovation-led. While there is no dedicated law governing AI, the aim has been to minimise AI-related risks, encourage AI adoption in the public and private sectors, and foster dialogue. On the risk front, the UAE has set up an AI Ethics Committee to develop principles and standards; the committee has published its own principles, guidelines, and self-assessment toolkit. On adoption, the UAE has a dedicated Artificial Intelligence, Digital Economy, and Remote Work Applications Office (AI Office), which recently launched a guide setting out how AI can be leveraged to benefit key sectors, including education, healthcare, automotive and media, while also addressing important issues such as data privacy protection, reliability, and quality control of AI outputs. On dialogue, the AI Office and Google announced the AI Majlis Series in 2023, a quarterly gathering held in the UAE that brings together officials from government, academia, and the public and private sectors to discuss AI public policy.

Transparency and Ethics

The EU AI Act also seeks to place obligations on creators of AI models. It mandates registration of models in an EU database prior to market entry, calls for details of copyrighted data used to train AI systems to be made publicly available, and requires that AI-generated content be identified as such, to help spot deepfakes.

In China too, there are provisions tackling algorithmic accountability, which focus on content management, tagging or labelling, transparency, data protection and fair practices. Additional regulations apply in certain areas, for example with regard to minors or e-commerce services. Further, China seeks to regulate "deep synthesis" technologies to combat deepfakes and to put in place a "safety assessment" before AI systems are released to the public, much like the EU. Specifically, the regulation requires AI-generated content to be truthful and accurate, and prohibits content that undermines state power or contains terrorist or extremist propaganda, violence, or obscene and pornographic information, among other things.

July 2023 saw a landmark self-regulation initiative in the US: key technology players, such as OpenAI, Alphabet and Meta, made voluntary commitments to implement measures - such as watermarking AI-generated content - to help minimise AI risks. As part of these industry-led voluntary commitments, the companies also pledged to thoroughly test systems before releasing them and to share information about how to reduce risks.

Fines and Penalties

The EU AI Act envisages high fines. For example, a breach of the prohibited AI practices is subject to a fine of up to EUR 40,000,000 or, if the offender is a company, up to 7% of its global turnover in the previous year. The quantum of the fine is indicative of how seriously the EU is taking the development and use of such prohibited practices. The EU is also working on targeted harmonisation of the national liability rules of EU member states, aiming to complement the AI Act by facilitating civil liability claims for damages.

In Canada, contravention of AIDA's governance and transparency requirements can lead to fines of up to the greater of CAD 25,000,000 and 5% of global revenues; individuals face a fine at the discretion of the court or imprisonment.

On the other hand, the UK whitepaper does not envisage allocating liability across the AI life cycle, as the UK Government considers it premature at this stage to reach a definitive conclusion on liability.

CONCLUSION

Given the nascency of AI technologies, there is no right or wrong approach to AI regulation – only different approaches, which may be viewed as different means to the same end: ensuring that AI systems remain safe, secure, and fair. This is non-negotiable, especially given the instances of harm already seen and the alarm bells sounded by experts and developers themselves. Therefore, regardless of the approach adopted, it is essential that all regulations be guided by the principles of Fairness, Ethics and Transparency to ensure that they remain robust and future-proof.

At the same time, it is worth noting that burdensome regulation can stifle the growth of this revolutionary technology and ultimately harm users. Countries, users and companies will therefore benefit from enabling regulations that duly account for the risks without seeking to overregulate.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.