BACKGROUND

The Organisation for Economic Co-operation and Development (OECD)2 Council Recommendation on Artificial Intelligence (AI), first adopted in May 2019 and revised in November 2023, establishes an international standard on AI.3 It aims to foster innovation and trust in AI, ensuring respect for human rights and democratic values. The recent revision updates the definition of an "AI System" to stay aligned with technological advancements, including generative AI systems.

KEY REQUIREMENTS

The OECD Council's Recommendation on Artificial Intelligence encompasses comprehensive guidelines detailing fundamental principles and policy recommendations to govern AI systems globally. These include:

  1. Inclusive Growth, Sustainable Development, and Well-being: This principle advocates for AI to drive broad-based growth, integrate sustainable development goals, and foster well-being. It highlights the importance of AI in enhancing productivity and economic gains while mitigating inequality and environmental impact.4
  2. Human-Centred Values and Fairness: This aspect emphasises respect for human rights, democratic values, and diversity. It underlines the need for AI systems to be fair and non-discriminatory, recognise and correct biases, and ensure that AI applications respect human dignity and privacy rights.5
  3. Transparency and Explainability: It is crucial that AI systems are transparent and that humans can understand their operations. This principle stresses the need for clear communication about AI systems' capabilities and limitations, ensuring people can understand and challenge AI-driven decisions.6
  4. Robustness, Security, and Safety: This principle requires AI systems to function reliably, be resilient to vulnerabilities, and be safe throughout their lifecycle. It mandates that AI systems should be secure against unauthorised access and malicious use and safe regarding their impact on people and the environment.7
  5. Accountability: The Recommendation calls for clear accountability frameworks for AI systems. This involves defining responsibilities for AI developers, deployers, and operators, ensuring they can be held accountable for the functioning and impact of AI systems.8

Furthermore, the Recommendation urges governments to adopt national policies and foster international cooperation.9 This includes promoting AI research and development, nurturing a conducive digital ecosystem, crafting policies that enable AI innovation while safeguarding public interests, and enhancing human capacity to adapt to AI-driven changes in the labour market. International cooperation is encouraged to build a consensus on standards for trustworthy AI.

IMPLICATION

The implications of the OECD Recommendation are far-reaching and include the following:1011

  1. Ethical and Responsible AI Development: By adhering to these standards, organisations are expected to develop AI technologies that are not only innovative but also ethical and responsible. This includes addressing ethical dilemmas and ensuring AI decisions are fair and just.
  2. Impact on Policies and Regulations: These principles will shape future AI policies and regulations globally. This includes the creation of new legal frameworks and amendment of existing laws to incorporate these AI standards.
  3. Influence on Organisational Strategy: Organisations must align their AI strategies with these principles to ensure compliance and foster trust among stakeholders, including customers, regulators, and the public. This may require significant changes in AI governance, risk management, and operational processes.

CONSIDER

Legal professionals and policymakers should consider the implications of this Recommendation on current and future AI-related initiatives. AI systems need to be aligned with these principles, ensuring they contribute positively to society and the economy while safeguarding human rights and democratic values.

CONCLUSION

The OECD Council's Recommendation on AI represents a significant step towards responsible AI stewardship globally. It provides a framework for countries and organisations to develop AI technologies that are ethical, transparent, and beneficial to society. Adhering to these principles is crucial for fostering innovation and trust in AI applications worldwide.

Footnotes

1 Setyawati Fitrianggraeni holds the position of Managing Partner at Anggraeni and Partners in Indonesia. She also serves as an Assistant Professor at the Faculty of Law, University of Indonesia, and is currently pursuing a PhD at the World Maritime University in Malmo, Sweden. This article is co-authored by Sri Purnama, Junior Legal Researcher, and Jericho Xafier Ralf, Trainee Associate Analyst, at Anggraeni and Partners.

2 The Organisation for Economic Co-operation and Development (OECD) is an intergovernmental organisation founded in 1961 to stimulate economic progress and world trade. The OECD's goal is to shape policies that foster prosperity, equality, opportunity, and well-being for all. There are currently 38 member countries within the OECD: Australia, Austria, Belgium, Canada, Chile, Colombia, Costa Rica, Czechia, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea, Latvia, Lithuania, Luxembourg, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Türkiye, United Kingdom, and United States.

3 The reference in this document pertains to the version dated 6 December 2023, as obtained from the OECD website, accessible at AI-Principles Overview – OECD.AI. Please note that subsequent amendments or updates to the legal instrument may have occurred after this date. Readers are advised to consult the latest version of the document for the most current information.

4 OECD, "Recommendation of the Council on Artificial Intelligence", OECD Legal Instruments, Section 1, Principle 1.1: Inclusive growth, sustainable development and well-being, accessed on 6 December 2023.

5 Ibid., Principle 1.2: Human-centred values and fairness.

6 Ibid., Principle 1.3: Transparency and explainability.

7 Ibid., Principle 1.4: Robustness, security and safety.

8 Ibid., Principle 1.5: Accountability.

9 Ibid., Section 2: National policies and international co-operation for trustworthy AI.

10 In 2019, the OECD Principles on AI were adopted by 42 countries, which agreed to uphold international standards aimed at ensuring AI systems are designed to be robust, safe, fair, and trustworthy. See OECD, "Forty-two countries adopt new OECD Principles on Artificial Intelligence", https://www.oecd.org/science/forty-two-countries-adopt-new-oecd-principles-on-artificial-intelligence.htm, accessed on 9 December 2023.

11 Several countries have national ethical frameworks and AI principles that align with the OECD AI Principles. As of May 2023, 17 such frameworks, including those of Japan, Korea, and India, were recorded in the OECD database, and the number continues to grow. Many countries are also considering AI-specific regulatory approaches. See Lucia Russo and Noah Order, "How Countries are Implementing the OECD Principles for Trustworthy AI, 2023", accessed on 9 December 2023.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.