The second quarter of 2019 saw a surge in debate about the role of governance in the AI ecosystem and the gap between technological change and regulatory response. This trend manifested in particular in calls for the regulation of certain "controversial" AI technologies or use cases, which in turn have emboldened lawmakers to take fledgling steps to control the scope of AI and automated systems in the public and private sectors. While it remains too soon to herald the arrival of a comprehensive federal regulatory strategy in the U.S., a number of recent high-profile draft bills address the role of AI and how it should be governed at the federal level, while state and local governments are already pressing forward with concrete legislative proposals regulating the use of AI.

As we have previously observed, over the past year lawmakers and government agencies have sought to develop AI strategies and policies that balance the tension between protecting the public from the potentially harmful effects of AI technologies and encouraging positive innovation and competitiveness.1 Now, for the first time, we are seeing federal, state and local government agencies show a willingness to take concrete positions on that spectrum, resulting in a variety of policy approaches to AI regulation—many of which eschew informal guidance and voluntary standards in favor of outright technology bans. We should expect that high-profile or contentious AI use cases or failures will continue to generate similar public support for, and ultimately trigger, accelerated federal and state action.2 For the most part, the trend among U.S. regulators in favor of more individual and nuanced assessments of how best to regulate AI systems specific to their end uses has been welcome. Even so, there is an inherent risk that reactionary legislative responses will result in a disharmonious, fragmented national regulatory framework. In any event, from a regulatory perspective, these developments will undoubtedly yield important insights into what it means to govern and regulate AI—and whether "some regulation" is better than "no regulation"—over the coming months.

Table of Contents

I. Key U.S. Legislative and Regulatory Developments

II. Bias and Technology Bans

III. Healthcare

IV. Autonomous Vehicles

I. Key U.S. Legislative and Regulatory Developments

As we reported in our Artificial Intelligence and Autonomous Systems Legal Update (1Q19), the House introduced Resolution 153 in February 2019, with the intent of "[s]upporting the development of guidelines for ethical development of artificial intelligence" and emphasizing the "far-reaching societal impacts of AI" as well as the need for AI's "safe, responsible, and democratic development."3 Similar to California's adoption last year of the Asilomar Principles4 and the OECD's recent adoption of five "democratic" AI principles,5 the House Resolution provides that the guidelines must be consonant with certain specified goals, including "transparency and explainability," "information privacy and the protection of one's personal data," "accountability and oversight for all automated decisionmaking," and "access and fairness."

Moreover, on April 10, 2019, U.S. Senators Cory Booker (D-NJ) and Ron Wyden (D-OR) introduced the "Algorithmic Accountability Act," which "requires companies to study and fix flawed computer algorithms that result in inaccurate, unfair, biased or discriminatory decisions impacting Americans."6 Rep. Yvette D. Clarke (D-NY) introduced a companion bill in the House.7 The bill stands to be Congress's first serious foray into the regulation of AI, and the first legislative attempt in the United States to regulate AI systems in general, as opposed to regulating a specific activity, such as the use of autonomous vehicles. While observers have noted congressional reticence to regulate AI in past years, the bill hints at a dramatic shift in Washington's stance amid growing public awareness of AI's potential to create bias or harm certain groups. Although the bill still faces an uncertain future, if enacted it would present businesses with a number of challenges, not least significant uncertainty in defining and, ultimately, seeking to comply with the proposed requirements for implementing "high risk" AI systems and utilizing consumer data, as well as the challenge of sufficiently explaining to the FTC how their AI systems operate. Moreover, the bill expressly states that it does not preempt state law—and states that have already been developing their own consumer privacy protection laws would likely object to any attempt at federal preemption—potentially creating a complex patchwork of federal and state rules.8

In the wake of House Resolution 153 and the Algorithmic Accountability Act, several strategy announcements and federal bills have been introduced, focusing on AI strategy, investment, fair use and accountability.9 While the proposed legislation remains in its early stages, the recent flurry of activity is indicative of the government's increasingly bold engagement with technological innovation and the regulation of AI, and companies operating in this space should remain alert to both opportunities and risks arising out of federal legislative and policy developments—particularly the increasing availability of public-private partnerships—during the second half of 2019.

A. The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update

Three years after the release of the initial National Artificial Intelligence Research and Development Strategic Plan, in June 2019 the Trump administration issued an update—previewed in the administration's February 2019 executive order10—bringing forward the original seven focus areas and adding an eighth: public-private partnerships.11 Highlighting the benefits of strategically leveraging resources, including facilities, datasets, and expertise, to advance science and engineering innovations, the update notes:

Government-university-industry R&D partnerships bring pressing real-world challenges faced by industry to university researchers, enabling "use-inspired research"; leverage industry expertise to accelerate the transition of open and published research results into viable products and services in the marketplace for economic growth; and grow research and workforce capacity by linking university faculty and students with industry representatives, industry settings, and industry jobs.

Companies interested in exploring the possibility of individual collaborations or joint programs advancing precompetitive research should consider whether they have relevant expertise in any of the areas in which federal agencies are actively pursuing public-private partnerships, including the DoD's Defense Innovation Unit and the Department of Health and Human Services.12 The updated plan also highlights what progress federal agencies have made with respect to the original seven focus areas:

  • Make long-term investments in AI research.
  • Develop effective methods for human-AI collaboration.
  • Understand and address the ethical, legal and societal implications of AI.
  • Ensure the safety and security of AI systems.
  • Develop shared public datasets and environments for AI training and testing.
  • Measure and evaluate AI technologies through standards and benchmarks.
  • Better understand the national AI R&D workforce needs.

B. NIST Federal Engagement in AI Standards

The U.S. Department of Commerce's National Institute of Standards and Technology ("NIST") is seeking public comment on a draft plan for federal government engagement in advancing AI standards for U.S. economic and national security needs ("U.S. Leadership in AI: Plan for Federal Engagement in Developing Technical Standards and Related Tools"). The plan recommends four actions: bolster AI standards-related knowledge, leadership and coordination among federal agencies; promote focused research on the "trustworthiness" of AI; support and expand public-private partnerships; and engage with international parties.13 The draft was published on July 2, 2019 in response to the February 2019 Executive Order that directed federal agencies to take steps to ensure that the U.S. maintains its leadership position in AI.14 The draft plan was developed with input from various stakeholders through a May 1 Request for Information,15 a May 30 workshop16 and federal agency review.

C. AI in Government Act

House Bill 2575 and its corresponding bipartisan Senate Bill 3502 (the "AI in Government Act")—which would task federal agencies with exploring the implementation of AI in their functions and establish an "AI Center of Excellence"—were first introduced in September 2018 and reintroduced in May 2019.17 The center would be directed to "study economic, policy, legal, and ethical challenges and implications related to the use of artificial intelligence by the Federal Government" and "establish best practices for identifying, assessing, and mitigating any bias on the basis of any classification protected under Federal non-discrimination laws or other negative unintended consequence stemming from the use of artificial intelligence systems."

One of the sponsors of the bill, Senator Brian Schatz (D-HI), stated that "[o]ur bill will bring agencies, industry, and others to the table to discuss government adoption of artificial intelligence and emerging technologies. We need a better understanding of the opportunities and challenges these technologies present for federal government use and this legislation would put us on the path to achieve that goal."18 Although the bill is aimed at improving the implementation of AI by the federal government, there are likely to be opportunities for industry stakeholders to participate in discussions surrounding best practices.

D. Artificial Intelligence Initiative Act

On May 21, 2019, U.S. Senators Rob Portman (R-OH), Martin Heinrich (D-NM), and Brian Schatz (D-HI) proposed legislation to allocate $2.2 billion over the next five years to develop a comprehensive national AI strategy to accelerate research and development in order to match other global economic powers like China, Japan, and Germany.19 S. 1558 (the "Artificial Intelligence Initiative Act") would create three new bodies: a National AI Coordination Office (to coordinate legislative efforts), a National AI Advisory Committee (consisting of experts on a wide range of AI matters), and an Interagency Committee on AI (to coordinate federal agency activity relating to research and education on AI).20 The bill also establishes the National AI Research and Development Initiative in order to identify and minimize "inappropriate bias" in data sets and algorithms. The requirement for NIST to identify metrics used to establish standards for evaluating AI algorithms and their effectiveness, as well as the quality of training data sets, may be of particular interest to businesses. Moreover, the bill requires the Department of Energy to create an AI research program, building state-of-the-art computing facilities that will be made available to private sector users on a cost-recovery basis.21

The draft legislation complements the formation of the bipartisan Senate AI Caucus in March 2019 by Senators Heinrich and Portman to address transformative technology with implications spanning a number of fields including transportation, health care, agriculture, manufacturing, and national security.22

E. FinTech

We have reported previously on the rapid adoption of AI by government agencies in relation to financial services.23 On May 9, 2019, Rep. Maxine Waters (D-CA) announced that the House Committee on Financial Services would launch two task forces focused on financial technology ("fintech") and AI:24 a fintech task force that will focus on regulating the fintech sector, and an AI task force that will focus on machine learning in financial services and regulation, emerging risks in algorithms and big data, combatting fraud and digital identification technologies, and the impact of automation on jobs in financial services.25

II. Bias and Technology Bans

As we reported in our Artificial Intelligence and Autonomous Systems Legal Update (1Q19), the topic of bias in AI decision-making has been at the forefront of policy discussions relating to the private sector for some time, and the deep learning community has responded with a wave of investments and initiatives focusing on processes designed to assess and mitigate bias and disenfranchisement26 at risk of becoming "baked in and scaled" by AI systems.27 Such discussions are now becoming more urgent and nuanced with the increased availability of AI decision-making tools allowing government decisions to be delegated to algorithms to improve accuracy and drive objectivity, directly impacting democracy and governance.28 Over the past several months, we have seen those discussions evolve into tangible and impactful regulations in the data privacy space and, notably, several outright technology bans.29 At a recent hearing of the House Financial Services Committee's fintech task force, participants discussed selected issues facing certain U.S. and international regulatory agencies, as well as certain regulators' efforts to engage with stakeholders in the fintech industry in order to consolidate and clarify communications, and inform policy.30


1 For more information, please see our Artificial Intelligence and Autonomous Systems Legal Update (4Q18); see also Ahmed Baladi, Gibson, Dunn & Crutcher LLP, Can GDPR Hinder AI Made in Europe? Cybersecurity Law Report (July 10, 2019), available at

2 See, for example, the House Intelligence Committee's hearing on Deepfakes and AI on June 13, 2019 (U.S. House of Representatives, Permanent Select Committee on Intelligence, Press Release: House Intelligence Committee To Hold Open Hearing on Deepfakes and AI (June 7, 2019)); see also Makena Kelly, Congress grapples with how to regulate deepfakes, The Verge (June 13, 2019), available at

3 H.R. Res. 153, 116th Cong. (1st Sess. 2019).

4 Assemb. Con. Res. 215, Reg. Sess. 2018-2019 (Cal. 2018) (enacted) (expressing the support of the legislature for the "Asilomar AI Principles"—a set of 23 principles developed through a collaboration between AI researchers, economists, legal scholars, ethicists and philosophers that met in Asilomar, California, in January 2017 and categorized into "research issues," "ethics and values," and "longer-term issues" designed to promote the safe and beneficial development of AI—as "guiding values for the development of artificial intelligence and of related public policy").

5 OECD Principles on AI (May 22, 2019) (stating that AI systems should benefit people, be inclusive, transparent, and safe, and their creators should be accountable), available at

6 Press Release, Cory Booker, Booker, Wyden, Clarke Introduce Bill Requiring Companies To Target Bias In Corporate Algorithms (Apr. 10, 2019), available at; see also S. Res. __, 116th Cong. (2019).

7 H.R. Res. 2231, 116th Cong. (1st Sess. 2019).

8 See Byungkwon Lim et al., A Glimpse into the Potential Future of AI Regulation, Law360 (April 10, 2019), available at

9 State legislatures have also recently weighed in on AI policy. In March 2019, Senator Ling Chang (R-CA) introduced Senate Joint Resolution 6, urging the president and Congress to develop a comprehensive AI advisory committee and to adopt a comprehensive AI policy, S.J. Res. 6, Reg. Sess. 2019–2020 (Cal. 2019); in Washington State, House Bill 1655 was introduced in February 2019, seeking to "protect consumers, improve transparency, and create more market predictability" by establishing guidelines for government procurement and use of automated decision systems, H.B. 1655, 66th Leg., Reg. Sess. 2019 (Wash. 2019).

10 For more information, please see our client alert President Trump Issues Executive Order on "Maintaining American Leadership in Artificial Intelligence."

11 Exec. Office of the U.S. President, The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update (June 2019), available at

12 Id. at 42.

13 NIST, U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools – Draft for Public Comment (July 2, 2019), available at

14 Supra note 9.

15 NIST, NIST Requests Information on Artificial Intelligence Technical Standards and Tools (May 1, 2019), available at

16 NIST, Federal Engagement in Artificial Intelligence Standards Workshop (May 30, 2019), available at

17 H.R. 2575, 116th Cong. (2019-2020); S. 3502 – AI in Government Act of 2018, 115th Cong. (2017-2018).

18 Press Release, Senator Brian Schatz, Schatz, Gardner Introduce Legislation To Improve Federal Government's Use Of Artificial Intelligence (September 2019), available at; see also Tajha Chappellet-Lanier, Artificial Intelligence in Government Act is back, with 'smart and effective' use on senators' minds (May 8, 2019), available at

19 S. 1558 – Artificial Intelligence Initiative Act, 116th Cong. (2019-2020); see further Khari Johnson, U.S. Senators propose legislation to fund national AI strategy, VentureBeat (May 21, 2019), available at

20 Matthew U. Scherer, Michael J. Lotito & James A. Paretti, Jr., Bipartisan Bill Would Create Artificial Intelligence Strategy for U.S. Workforce, Lexology (May 30, 2019), available at

21 Press Release, Senator Martin Heinrich, Heinrich, Portman, Schatz Propose National Strategy For Artificial Intelligence; Call For $2.2 Billion Investment In Education, Research & Development (May 21, 2019), available at

22 Press Release, Senator Martin Heinrich, Heinrich, Portman Launch Bipartisan Artificial Intelligence Caucus (Mar. 13, 2019), available at

23 For more information, please see our Artificial Intelligence and Autonomous Systems Legal Update (4Q18).

24 Katie Grzechnik Neill, Rep. Waters Announces Task Forces on Fintech and Artificial Intelligence (May 13, 2019), available at

25 See Scott Likens, How Artificial Intelligence Is Already Disrupting Financial Services, Barrons (May 16, 2019), available at

26 See also Kalev Leetaru, Why Do We Fix AI Bias But Ignore Accessibility Bias? Forbes (July 6, 2019), available at; Alina Tugend, Exposing the Bias Embedded in Tech, NY Times (June 17, 2019), available at

27 Jake Silberg & James Manyika, Tackling Bias in Artificial Intelligence (and in Humans), McKinsey Global Institute (June 2019), available at

28 Nicol Turner Lee, Paul Resnick & Genie Barton, Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms, Brookings Institute (May 22, 2019), available at

29 See also the French government's recent law, encoded in Article 33 of the Justice Reform Act, prohibiting anyone—especially legal tech companies focused on litigation prediction and analytics—from publicly revealing the pattern of judges' behavior in relation to court decisions, France Bans Judge Analytics, 5 Years In Prison For Rule Breakers, Artificial Lawyer (June 4, 2019), available at

30 U.S. H.R. Comm. on Fin. Servs., Overseeing the Fintech Revolution: Domestic and International Perspectives on Fintech Regulation (June 25, 2019), at 4, available at
