Introduction

In recent months, Congress and the Executive Branch have been sprinting to learn about and regulate artificial intelligence (AI) systems in an attempt to catch up with their rapid technological advancement. AI industry leaders Anthropic, Google, Microsoft, OpenAI, and others have mobilized to secure seats at the regulatory and legislative table, including through signing onto the Biden Administration's voluntary code of conduct. In this Advisory, we provide an overview of recent policy and regulatory developments to assist firms in advancing their AI interests. See our June 6 Advisory on this subject for additional background.

The Biden Administration

White House Secures Private Industry Commitments to Self-Regulate AI Innovation— In July 2023, the Biden Administration secured voluntary commitments from seven leading AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) to "underscor[e] safety, security, and trust" in the development of AI technology. (Eight more companies signed on in September.) These include commitments to:

  • "Internal and external security testing of their AI systems before their release"
  • "Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights"
  • "Publicly reporting their AI systems' capabilities, limitations, and areas of appropriate and inappropriate use"

Executive Agencies Consider AI Adoption— In June 2023, the Biden Administration announced the launch of a new public working group focused on generative AI. The working group, led by the National Institute of Standards and Technology, "will help address the opportunities and challenges associated with AI that can generate content, such as code, text, images, videos and music." Executive agencies have also begun to incorporate AI into their overall strategies, with the Office of the Director of National Intelligence underscoring the importance of AI in its 2023-2025 Data Strategy. The Department of Defense (DoD) has approached AI with similar interest; the DoD's Chief Digital and Artificial Intelligence Office has indicated it will release an AI strategy later this year. In August, DoD also announced the creation of "Task Force Lima," which will evaluate and guide the implementation of AI technology across the DoD.

U.S. Copyright Office Clarifies and Applies Existing Copyright Law to Generative AI— Since launching its AI Initiative in March 2023, the U.S. Copyright Office (USCO—technically, part of the Legislative Branch) has focused on clarifying and applying existing frameworks to generative AI. As part of its AI Initiative, the USCO has published a statement of policy on the registration of works that contain generative materials, clarified the application of the USCO's updated guidance on royalties generated through blanket licenses under Section 115 of the Copyright Act, and hosted public listening sessions on the impact of generative AI and copyright law on software, visual arts, audiovisual works, and music and sound recordings. This summer, the USCO has hosted webinars providing copyright registration guidance for works that contain generative materials and perspectives on how other countries are approaching similar questions. The USCO also commented in the National Telecommunications and Information Administration's (NTIA) inquiry regarding AI accountability policy.

NTIA Receives Significant Response on AI Accountability Policy— In response to an April 2023 request for comments, the NTIA received more than 1,450 comments on AI system accountability measures and policies. The NTIA plans to use these comments to inform a report and policy recommendations on mechanisms that can create earned trust in AI systems.

Congressional Hearings and Leadership Engagement

Bipartisan Senators Closely Coordinating on Next Steps to Regulate AI— This year, Senate Majority Leader Chuck Schumer (D-NY) and Sens. Martin Heinrich (D-NM), Todd Young (R-IN), and Mike Rounds (R-SD) held over 100 staff-level meetings with academic, association, and industry groups on AI-related issues in an effort to develop a framework for legislation. During a June 21 event, Sen. Schumer outlined the core tenets of this "SAFE Innovation" framework, which include (1) security; (2) accountability; (3) a foundation in democratic values; (4) explainability; and (5) supporting U.S.-led innovation. The Senate is expected to examine legislation informed by this framework as the year progresses, but the timeline for the bill's introduction is uncertain.

Leader Schumer Kicks Off Bipartisan AI Briefings— In a July letter, Sen. Schumer laid out his legislative priorities for the remainder of 2023, including building on his SAFE Innovation framework. Sen. Schumer then hosted a series of three closed, members-only briefings intended to educate senators on emerging AI issues. The briefings included a summary of the current state of AI, a confidential briefing on the national security implications of AI, and a discussion of the future of AI research. The briefings convened representatives from the public and private sectors alongside academia, but details of the discussions are limited. In a similar vein, Stanford University hosted an AI bootcamp to educate congressional staffers on the topic. The Senate is expected to continue hosting stakeholders for "AI Insight Briefings" following the August recess. The first of these closed briefings will take place on September 13 and will include executives from leading tech companies, including Tesla, Meta, and OpenAI, as well as the leaders of labor and civil rights organizations.

Congress Holds AI Hearings Focused on Intellectual Property Issues— In addition to closed briefings, Congress has held several hearings on the topic. The Senate Judiciary Committee has taken a leading role, holding a series of hearings on AI and human rights, copyright, and general regulatory principles. The need to modernize intellectual property law to protect creators from content theft by AI models emerged as a theme in each hearing. Senate Judiciary Intellectual Property Subcommittee leaders Chris Coons (D-DE) and Thom Tillis (R-NC) have indicated they will prioritize this intersection later this year. During the regulatory principles hearing, several senators and witnesses argued that AI would be best regulated by an independent agency that could assess AI safety and invest in countermeasures against rogue AI, while others support leaving oversight of AI to existing agencies with sector-specific expertise. The Senate Judiciary Committee is expected to hold at least three more hearings on the subject. The House has held hearings on AI in modern warfare and federal investments in AI research. Both House hearings highlighted Congress's desire to outcompete China for AI dominance, a threat downplayed by witnesses in the Senate, who suggested Chinese AI innovators cannot compete with American firms.

New Democrats to Focus on AI— The New Democrat Coalition (the Coalition), a group of nearly 100 House Democrats, announced the creation of an Artificial Intelligence Working Group, led by Chair Derek Kilmer (D-WA) and Vice Chairs Don Beyer (D-VA), Jeff Jackson (D-NC), Sara Jacobs (D-CA), Susie Lee (D-NV), and Haley Stevens (D-MI). In an interview, Rep. Kilmer indicated the Working Group will focus on combatting deepfakes, reducing AI-driven job displacement, and promoting member education on AI, among other topics. The Coalition's most recent prior involvement in AI policy was in 2019, when the Coalition endorsed a series of bills designed to protect personal information, study the impact of AI on the job market, and establish ethical guardrails for AI development.

Legislative Activity

National Defense Authorization Act Considers AI Proposals—The annual Defense policy bill, the National Defense Authorization Act (NDAA), includes several key AI-related provisions. Many of these provisions broadly seek to enhance coordination and cooperation on AI-related efforts within the DoD. The House bill directs DoD to define the unique responsibilities of its sub-organizations when implementing AI, while the Senate bill would establish a Digital and Artificial Intelligence Governing Council to coordinate and oversee AI capabilities. The Senate NDAA also includes a provision, led by Sen. Mark Warner (D-VA), requiring DoD to review and categorize the department's current investments in AI applications and research. The NDAA is considered a "must-pass" bill, and it will almost certainly be adopted by the end of 2023. These individual provisions will be up for debate as the House and Senate begin the conference process to resolve differences between their two bills this fall.

AI Innovation Spurs Legislative Activity— Beyond the NDAA, dozens of bills have been introduced recently related to AI, including the following notable bills:

  • Reps. Ted Lieu (D-CA), Ken Buck (R-CO), Anna Eshoo (D-CA), and Sen. Brian Schatz (D-HI) announced the introduction of the National AI Commission Act (H.R. 4223), which would create a "bipartisan, blue ribbon commission" to review the federal government's approach to regulating AI and provide broad recommendations for improvement, including new government structures.
  • Sens. Gary Peters (D-MI) and John Cornyn (R-TX) introduced the AI Leadership To Enable Accountable Deployment Act (S. 2293), which would establish a Chief AI Officer within every federal agency and an interagency council of these AI officers to ensure coordination across the federal government on the use of AI systems. The bill was reported favorably out of the Senate Homeland Security and Governmental Affairs Committee in July.
  • Sens. Martin Heinrich (D-NM) and Todd Young (R-IN) and Reps. Anna Eshoo (D-CA) and Michael McCaul (R-TX) introduced the Creating Resources for Every American To Experiment with Artificial Intelligence Act of 2023 (H.R. 5077/S. 2714). The bill would establish the National Artificial Intelligence Research Resource as a shared national research infrastructure to promote greater access to the data and resources needed to spur the development of AI.
  • Senate Judiciary Intellectual Property Subcommittee leaders Thom Tillis (R-NC) and Chris Coons (D-DE) introduced the Patent Eligibility Restoration Act of 2023 (S. 2140), which clarifies that patent eligibility law applies to AI-generated ideas. The bill explicitly excludes mental processes, unmodified human genes, and unmodified natural material from patent eligibility.
  • Sens. Ed Markey (D-MA) and Ted Budd (R-NC) introduced two pieces of legislation to identify and mitigate AI-driven threats to public health. The Artificial Intelligence and Biosecurity Risk Assessment Act (S. 2399) would require the U.S. Department of Health and Human Services (HHS) to conduct an assessment and implement strategic initiatives to address whether advancements in AI technology could be used to develop pathogens or bioweapons. The Strategy for Public Health Preparedness and Response to Artificial Intelligence Threats Act (S. 2346) would require HHS to develop a strategy to respond to public health threats posed by AI-driven biological threats, including by leveraging AI technology itself.
  • Reps. Lori Chavez-DeRemer (R-OR), Darren Soto (D-FL), Lisa Blunt Rochester (D-DE), and Andrew Garbarino (R-NY) introduced the Jobs of the Future Act (H.R. 4498), which would mandate a report on the impact of AI on the U.S. workforce. The bill would require the Department of Labor and the National Science Foundation to analyze a variety of AI workforce impacts, including job displacement, potential public-private partnerships on the topic, and the data required to evaluate AI's effect on the U.S. workforce, among others.
  • Rep. Doris Matsui (D-CA) and Sen. Markey introduced the Algorithmic Justice and Online Platform Transparency Act (H.R. 4624/S. 2325) prohibiting algorithmic discrimination based on protected characteristics, establishing safety and effectiveness standards for algorithms, and requiring tech platforms to publish information about their algorithmic processes.

Sens. Richard Blumenthal (D-CT) and Josh Hawley (R-MO), leaders of the Senate Judiciary Privacy, Technology, and the Law Subcommittee, released a bipartisan framework for comprehensive AI legislation. The framework would establish an independent AI licensing regime, administered by a third-party oversight body, for sophisticated general-purpose AI models and high-risk applications; ensure redress from AI harms, including private rights of action and removal of any protections under Section 230 of the Communications Decency Act of 1996; mandate identification of deepfakes through watermarks or other technical means; impose data-privacy and transparency requirements; and provide a right to human review of high-risk or consequential AI decisions. The framework also calls for the use of export controls and sanctions to limit access to AI models and hardware by U.S. geopolitical rivals. The framework has not yet been introduced as a bill.

Now that Congress has returned from its August recess, the top legislative priority is addressing the twelve fiscal year (FY) 2024 appropriations bills and passing a continuing resolution (CR) to prevent a federal government shutdown before the current FY ends on September 30. While Congress negotiates its federal funding packages and a CR, interest continues to grow in developing a federal framework to respond to AI and other novel technologies, which affect a range of industries.

What to Watch Later This Year

Congress— As members of Congress develop more nuanced understandings and opinions of AI and its various policy implications, in-depth monitoring of legislative activity becomes increasingly important. While a divided Congress and a forthcoming battle over government funding will make passing sweeping AI legislation challenging, as Sen. Todd Young (R-IN) has indicated, legislators will use the remainder of the year to lay the groundwork for proposals that will receive more serious consideration in 2024 and beyond. These proposals may include legislation to establish a content-licensing regime to prevent AI firms from training models on copyrighted content without permission.

The Senate will continue to host academic-, public-, and private-sector presentations in AI Insight Forums to continue educating staff and members on issues in AI policy. The Senate has also turned its attention to AI in healthcare. In September, Senate Health, Education, Labor and Pensions Committee Ranking Member Bill Cassidy (R-LA) published a white paper to gather stakeholder feedback on bipartisan ways Congress can leverage the benefits, and mitigate the risks, of the modern use of AI in health, education, and workforce settings. Ranking Member Cassidy is requesting that stakeholder feedback be submitted by September 22, 2023. In August, Sen. Mark Warner (D-VA) sent a letter to Google expressing concern following reports that Google began providing Med-PaLM 2, a large language model designed to answer medical questions, to U.S. hospitals for testing purposes. Sen. Warner also sent letters to the CEOs of several companies citing "disturbing reports" that the companies' AI models provided underage users "dangerous advice that may encourage and exacerbate eating disorders." He urged the firms to implement safeguards to prevent harmful recommendations. These actions could foreshadow additional congressional inquiry into these issues.

In contrast to Leader Schumer and some other senators, representatives of both parties plan to legislate in a piecemeal, rather than omnibus, fashion. In addition, AI legislative developments in the House are expected to move more slowly than in the Senate. The Energy and Commerce Committee has prioritized passage of the yet-unintroduced American Data Privacy and Protection Act before considering AI legislation, although it is expected to hold AI-related hearings later this year. Rep. Suzan DelBene (D-WA) advanced this position in an August op-ed, in which she underscored the importance of passing a federal data privacy standard before regulating AI broadly. Similarly, House Financial Services Chair Patrick McHenry (R-NC) is expected to turn his committee's focus toward AI in late 2023.

Copyright Office Seeks Comments on Policy Questions Related to the Application of Copyright Law to AI— In late August, the USCO issued a Notice of Inquiry seeking public comments on policy issues related to "(1) the use of copyrighted works to train AI models; (2) the copyrightability of material generated using AI systems; (3) potential liability for infringing works generated using AI systems; and (4) the treatment of generative AI outputs that imitate the identity or style of human artists." The USCO plans to use submitted comments to inform Congress on "the current state of the law," identify areas for legislative action, and inform USCO's own regulatory work regarding the application of copyright law to AI. USCO will accept written comments until October 18, 2023.

AI in Healthcare Prompts Calls for Increased Oversight— During a Biotechnology Innovation Organization convention earlier this summer, U.S. Food and Drug Administration (FDA) Commissioner Robert Califf said certain language models used in generative AI, including ChatGPT, are transforming the development of new drugs and therapies, which will require FDA to promulgate new regulations. Commissioner Califf claimed industry leaders "want to be regulated" in this space yet asserted that there are few "good suggestions" on how best to regulate this emerging technology. Similarly, during a Senate Finance Committee hearing held on June 8, 2023, titled "Consolidation and Corporate Ownership in Health Care: Trends and Impacts on Access, Quality, and Costs," the committee discussed a range of issues, including the high level of claims denials reported by Medicare Advantage plans, many of which use AI-based algorithms in cost-containment processes such as prior authorization.

FDA Seeks Comments on Using AI in Drug and Biologic Development— On May 10, 2023, FDA issued a discussion paper on using AI in the development of drugs and biological products. Main topics discussed in the paper include the landscape of current and potential uses of AI (e.g., drug target identification and prioritization, compound screening and design, and clinical trial applications); considerations for use of AI (including overarching standards and practices); and next steps and stakeholder engagement. The discussion paper is not FDA guidance or policy and does not endorse a specific AI use or approach in drug development. Rather, FDA describes the discussion paper as "an initial communication with stakeholders ... intended to promote mutual learning and discussion." FDA recognizes the increased use of AI throughout the drug development life cycle and its potential to accelerate the development of safe and effective drugs. The agency explains that it has seen a rapid growth in the number of drug and biological product applications that include AI (over 100 submissions in 2021). Per FDA, such submissions cut across a range of therapeutic areas, and the uses of AI within the submissions cover the many different areas of the drug development process — from drug discovery and clinical trial enrichment to endpoint assessment and post-market safety surveillance. FDA is soliciting feedback on the opportunities and challenges with utilizing AI in the development of drugs, as well as in the development of medical devices intended to be used with drugs. Comments on the discussion paper were due by August 9, 2023.

Limiting U.S. Investments in Chinese Military and Intelligence AI Systems— As part of President Biden's August 2023 Executive Order limiting outbound U.S. investments in "countries of concern" (China), the U.S. Department of the Treasury issued an Advance Notice of Proposed Rulemaking (ANPR) seeking public comment on its proposed Outbound Investment Program. Under the current version of the Outbound Investment Program, certain China-based investments in AI software used for cybersecurity, digital forensics tools, penetration testing tools, controlling robotic systems, surreptitious listening, noncooperative location tracking, or facial recognition would require notification to the U.S. Treasury Department, whereas such investments in AI tools for military, government intelligence, or mass-surveillance applications would be prohibited outright (see our recent Advisory for more detail). Interested parties may submit comments on the ANPR through September 28, 2023.

AI and Financial Services— Financial regulators are also expected to expand their oversight of AI in the financial system. In late June, the Office of the Comptroller of the Currency, Board of Governors of the Federal Reserve System (Federal Reserve), Federal Deposit Insurance Corporation, National Credit Union Administration, Consumer Financial Protection Bureau, and Federal Housing Finance Agency published a notice of proposed rulemaking that would set quality control standards for automated home appraisals (see our recent Advisory and blog post for more detail). Federal Reserve Vice Chair for Supervision Michael Barr separately underscored the importance of continued compliance with the Fair Housing Act and Equal Credit Opportunity Act as AI becomes more prevalent in the financial services industry. Barr also noted the Federal Reserve is considering updates to Community Reinvestment Act regulations to accommodate AI technology (see our Advisory for additional perspectives). In August, the Securities and Exchange Commission (SEC) proposed rules that would require broker-dealers and investment advisers to identify and neutralize conflicts of interest stemming from their use of predictive data analytics, and to codify policies to maintain compliance with the proposal. The SEC will accept comments on the proposal until October 10, although industry parties have sought a longer comment period.

Other Expected Executive Actions— The administration is drafting an AI-focused Executive Order that will guide executive agencies' approach to the technology. In addition, in August 2023, the Federal Election Commission unanimously advanced a petition to examine whether the agency should amend its regulations to make clear that "deliberately deceptive" AI campaign advertisements are prohibited. Interested parties may submit comments through October 16, 2023. Later this year, the Federal Trade Commission is expected to seek comments on proposed rules governing commercial surveillance and data collection, which would likely regulate algorithmic discrimination and automated decision-making. Any proposal would build on last year's ANPR on these topics (see our Advisory for further details).

* Vincent Brown contributed to this Advisory. Vincent is a graduate of American University Washington College of Law and is employed at Arnold & Porter's Washington, D.C. office. Vincent is admitted in Kentucky and is not admitted to the practice of law in Washington, D.C.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.