29 countries sign the Bletchley Declaration on AI

This month the UK hosted the AI Safety Summit 2023 at Bletchley Park, the first global summit of its kind, which resulted in the 'Bletchley Declaration' being signed by 29 parties, including the U.S., the UK, the EU, and China. The Declaration reflects a collective recognition of the opportunities and challenges presented by AI technology and makes the case for global collaboration in addressing the associated risks. It emphasises the need for a joint effort to understand and manage the potential dangers associated with AI, ensuring its safe and responsible development and deployment.

The Bletchley Declaration covers many aspects of AI development but, in particular, aims to address AI-related risks by:

  1. 'Identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies'; and
  2. 'Building respective risk-based policies across [the] countries to ensure safety in light of such risks, collaborating as appropriate while recognising [the countries'] approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.'

As a follow-up initiative, the Republic of Korea has committed to co-hosting a virtual summit on AI within the next six months. Subsequently, France will host an in-person summit in autumn 2024.

President of the United States issues Executive Order on AI

Shortly before the UK's AI Summit, the White House issued an Executive Order setting out how regulation and the policy landscape for Artificial Intelligence (AI) in the United States may evolve. The Executive Order is intended to protect various groups, including consumers, patients, students, workers and children.

The Executive Order includes:

  1. Sharing Safety Test Results: Developers of powerful AI systems must share safety test results and critical information with the U.S. government, especially if these systems pose risks to national security, economic security, or public health and safety.
  2. Rigorous Test Standards: The National Institute of Standards and Technology (NIST) will establish rigorous standards and conduct extensive red-team testing before AI systems are publicly released. These standards will be applied to critical infrastructure sectors to ensure safety.
  3. Biological Synthesis Screening: New standards for biological synthesis screening will be developed to prevent the risks associated with using AI to engineer dangerous biological materials, ensuring appropriate screening and risk management.
  4. Content Authentication: The Department of Commerce will establish standards for detecting AI-generated content, authenticating official content, and labelling AI-generated content to protect against fraud and deception.
  5. Cybersecurity Program: An advanced cybersecurity program will be established to develop AI tools that identify and fix vulnerabilities in critical software and enhance network security.
  6. Privacy: Congress is urged to pass bipartisan data privacy legislation. Federal support will be prioritised for the development of privacy-preserving techniques, and guidelines will be developed for federal agencies to evaluate the effectiveness of such techniques in AI systems.
  7. Responsible AI, Equity, and Civil Rights: Actions will be taken to combat algorithmic discrimination, ensure fairness in the criminal justice system, and provide clear guidance to prevent AI algorithms from exacerbating discrimination.
  8. Protecting Consumers, Patients, and Students: Measures will be taken to advance the responsible use of AI in healthcare, protect against unsafe healthcare practices involving AI, and support the deployment of AI-enabled educational tools.
  9. Protecting Workers: Principles and best practices will be developed to mitigate the negative impacts of AI on workers, addressing issues such as job displacement, labour standards, workplace equity, health, safety, and data collection.
  10. Promoting Innovation and Competition: Efforts will be made to catalyse AI research, promote a fair and competitive AI ecosystem, and expand the ability of skilled individuals to study, stay, and work in the U.S., focusing on critical areas of expertise.
  11. Advancing American Leadership Abroad: The U.S. will collaborate with other nations to establish international frameworks for the responsible deployment of AI, accelerate the development and implementation of AI standards, and promote the safe and rights-affirming development and deployment of AI abroad.
  12. Ensuring Responsible and Effective Government Use of AI: Guidance will be issued for agencies' use of AI, including standards to protect rights and safety, improve AI procurement, and strengthen AI deployment. Rapid hiring of AI professionals will be conducted to support government-wide AI initiatives.

While the Executive Order addresses several issues, it does not cover everything that has been on the agenda of those looking at the future of AI. For example, it does not address questions such as the treatment of AI-generated intellectual property. Its primary focus is on AI developers, emphasising the importance of ensuring AI systems are safe and responsible before public deployment. The Executive Order also encourages an ethical approach to AI technology, although its full implementation and its subsequent effects on the AI landscape remain to be seen.

U.S. States to consider whether to follow Federal Government or diverge on AI

At the International Association of Privacy Professionals ('IAPP') AI Governance Global conference, State Senator James Maroney of Connecticut provided an overview of the current state of AI policy in the U.S. He highlighted the specific AI priorities for state lawmakers in the upcoming legislative season, including government utilisation of AI, addressing algorithmic discrimination, and combating the influence of so-called 'deepfake' election advertisements. Maroney stressed the need for collaborative efforts among states to develop coherent and effective AI policies.

Maroney shared insights from his involvement in an informal group comprising lawmakers from nearly 30 states, alongside professionals from the technology and legal sectors. He underscored the significance of aligning definitions across states, making compliance processes more manageable for businesses and professionals working in the AI sector.

Maroney also highlighted Connecticut's Senate Bill 1103, which assigns responsibility for crafting AI policies for public entities to the Connecticut Office of Policy and Management. In his closing remarks, Maroney urged active participation from residents and professionals, encouraging them to engage with legislators. He emphasised the importance of providing valuable perspectives on the practical challenges of complying with proposed AI legislation, ensuring that the policies developed are robust, comprehensive, and effective.

G7 Leaders endorse Hiroshima AI process outputs

The leaders of the G7 governments have highlighted the potential of advanced Artificial Intelligence ('AI') systems, specifically foundation models and generative AI, while recognising the need to manage risks and protect individuals and democratic values. They endorsed the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems (the 'Principles') and the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (the 'Code of Conduct'). The G7 urges organisations to adhere to these guidelines and has instructed relevant ministers to accelerate the development of a comprehensive policy framework for AI cooperation by the end of the year. The aim is to create a global environment where AI systems are safe, secure, and trustworthy, fostering digital inclusion and minimising risks for the common good worldwide.

The Principles are as follows:

  1. Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate, and mitigate risks across the AI lifecycle.
  2. Identify and mitigate vulnerabilities, and, where appropriate, incidents and patterns of misuse, after deployment including placement on the market.
  3. Publicly report advanced AI systems' capabilities, limitations and domains of appropriate and inappropriate use, to support ensuring sufficient transparency, thereby contributing to increase accountability.
  4. Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems including with industry, governments, civil society, and academia.
  5. Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach – including privacy policies, and mitigation measures, in particular for organizations developing advanced AI systems.
  6. Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle.
  7. Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content.
  8. Prioritize research to mitigate societal, safety and security risks and prioritize investment in effective mitigation measures.
  9. Prioritize the development of advanced AI systems to address the world's greatest challenges, notably but not limited to the climate crisis, global health and education.
  10. Advance the development of and, where appropriate, adoption of international technical standards.
  11. Implement appropriate data input measures and protections for personal data and intellectual property.

The Code of Conduct calls for organisations to take specific actions in respect of each of the Principles. Neither the Principles nor the Code of Conduct will be directly enforceable, so it is down to individual governments (and the EU) to legislate as they deem appropriate in relation to them.

Overview

Most of these recent announcements align with a key aspect of the Principles: their call for international cooperation. By urging organisations worldwide to adhere to these guidelines, the G7 nations seek to create a collaborative environment in which diverse stakeholders work collectively to address the ethical challenges associated with advanced AI systems. These announcements are a reminder of how the world is grappling with the largely unregulated realm of AI. They acknowledge the need for ethical considerations in technological advancements, but also highlight the challenges of implementing these principles on a global scale. We will continue to monitor developments and report on them in our publications.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.