On June 21, 2023, Senate Majority Leader Chuck Schumer joined the Center for Strategic and International Studies (CSIS) to launch his SAFE Innovation Framework, a comprehensive approach to addressing the challenges associated with artificial intelligence (AI). He also announced his AI Insight Forums, a plan to convene AI experts from various fields to discuss emerging issues and help guide Congress in regulating the technology. In his remarks, Senator Schumer stressed that Congress must be at the forefront of maximizing AI's benefits while protecting the American people, and he emphasized the importance of safe innovation that balances the adoption of new technologies with appropriate guardrails.

The next day, the Biden Administration announced a new National Institute of Standards and Technology (NIST) Public Working Group on AI, which will build on NIST's AI Risk Management Framework. The group's goal is to respond to the rapidly growing challenges presented by generative AI. The announcement came shortly after President Biden met with AI experts and researchers that same week as part of the administration's plan to address the opportunities and challenges presented by AI.

These developments are significant because they indicate that multiple US government actors are taking concrete steps toward (eventually) regulating AI. These proposals come on the heels of the European Union (EU) moving forward with negotiations on the EU AI Act, and regulators elsewhere in the world are also looking to address the issue. Companies operating in the AI space should be aware of these proposals and note that there is significant appetite for regulating the use and development of AI tools.

In this post, we summarize the SAFE Innovation Framework, the AI Insight Forums, and the NIST Public Working Group on AI. We will continue to track notable updates in this area through the WilmerHale Privacy and Cybersecurity Law Blog.

Senator Schumer's SAFE Innovation Framework

The "SAFE" in the SAFE Innovation Framework stands for Security, Accountability, Foundations, and Explainability. The policy objectives of each pillar are discussed below.

Security

It is unclear what the capabilities of AI will be years or decades from now, and bad actors could use the technology for illicit purposes such as extortionist financial gain or political upheaval. The SAFE Innovation Framework therefore recognizes the importance of establishing guardrails to prevent malicious uses of AI.

The framework also acknowledges the need for economic security for America's workforce, since AI is disrupting jobs across many industries. While low-income workers are at greatest risk, AI is also affecting skilled occupations such as marketing, software development, banking, and law. Because AI could further erode the middle class, the framework calls for measures to prevent job loss and the misdistribution of income.

Accountability

The SAFE Innovation Framework also considers accountability, which entails regulating how AI is developed, audited, and deployed. For instance, AI has the potential to spread misinformation and perpetuate bias, and it raises novel intellectual property concerns. AI poses particular risks to vulnerable populations such as minors and low-income individuals. The framework therefore supports the deployment of responsible systems to address these concerns.

Foundations

AI itself neither supports nor opposes American values like liberty, civil rights, and justice, so algorithms must be designed to align with these democratic foundations. One concern is the potential harm AI may bring to the electoral process, such as fabricated images, footage, and statements of political candidates. In addition, chatbots could be deployed at scale to target voters with political persuasion. The framework accordingly calls for designing AI systems that promote democratic values and reflect what matters to the American people.

Explainability

AI systems run on complex algorithms that most users cannot understand, so these systems should be transparent and allow users to see how answers are generated. Senator Schumer noted in his remarks that this pillar should be Congress' top priority and that the private sector must take the lead in tackling the "black box" of AI.

Senator Schumer's AI Insight Forums

Senator Schumer also noted that traditional approaches to policymaking will not suffice for AI legislation. He discussed his plan to convene AI experts in Congress later this year to develop a new process for crafting AI legislation. These forums would bring together top AI developers, executives, scientists, advocates, community leaders, workers, and national security experts for panels and roundtable discussions on AI's challenges and a viable path forward.

Topics for the Insight Forums would include AI innovation, copyright and intellectual property, workforce issues, national security, AI's role in our social world, and transparency. The forums would represent an evolution from the traditional congressional hearing process of opening statements followed by each member asking questions for five minutes at a time. This redesigned approach reflects Congress' recognition that it needs to respond more quickly and effectively to developments in AI.

The Biden Administration's NIST Public Working Group on AI

US Secretary of Commerce Gina Raimondo announced that the new NIST Public Working Group will build on the success of the NIST AI Risk Management Framework (NIST AI RMF). The framework, released in January 2023, is intended to improve organizations' ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI systems. The working group, which will consist of technical experts from across the public and private sectors, will help NIST develop key guidance for addressing the risks posed by generative AI. In the short term, the group will gather input on how the NIST AI RMF may be used to support the development of generative AI technologies. In the medium term, it will support NIST's work on testing and evaluating generative AI. In the long term, it will help ensure that generative AI technologies are used productively and that their risks are managed appropriately.
