Although President Biden has begun overturning the previous administration's executive orders, many of those prior orders will survive. One such order appears to be Executive Order 13,960, "Promoting the Use of Trustworthy Artificial Intelligence ["AI"] in the Federal Government" (the "Order").1 The purpose of the Order is to provide guidance to federal agencies to ensure they "design, develop, acquire, and use AI in a manner that fosters public trust and confidence while protecting privacy, civil rights, civil liberties and American values."2 The Order is intended to push forward the AI priorities of Executive Order 13,859,3 which implemented principles and objectives for federal agencies to rely on to drive American advancements in AI, and the 2020 memo from the Office of Management and Budget (the "OMB Memo"), which set out policy considerations to guide regulatory and non-regulatory approaches to AI applications developed and deployed outside the federal government.4

The Order does not provide sufficient direction regarding the results we want federal government AI to produce, but it will have direct and indirect effects on AI development and adoption in the public and private sectors. Below, I outline a few of those.

Federal Principles in the Process, But No Federal Values in the End Result

The OMB Memo provided 10 principles for federal agencies to use when developing (or declining to develop) regulations governing AI:

  1. Promote public trust in AI;
  2. Provide ample opportunities for the public to participate in and provide feedback on rulemaking governing AI;
  3. Leverage scientific and technical information and processes;
  4. Assess risks in subject AI;
  5. Consider the costs and benefits of any AI;
  6. Maintain a flexible approach to adapt to changes and updates to AI applications;
  7. Consider impacts AI may have on fairness and discrimination;
  8. Incorporate disclosure and transparency in the rulemaking process to increase public trust and confidence in AI applications;
  9. Promote AI systems that are safe, secure, and operate as intended; and
  10. Coordinate with other federal agencies on AI strategies.5

The Order seeks to impose similar (although not identical) principles on federal agencies as they design, develop, or acquire AI applications:

  • Lawful and "respectful of our Nation's values";
  • Purposeful and performance-driven;
  • Accurate, reliable, and effective;
  • Safe, secure, and resilient;
  • Understandable;
  • Responsible and traceable;
  • Regularly monitored;
  • Transparent; and
  • Accountable.6

In general, although the OMB Memo prioritizes innovation and growth while the Order prioritizes building trust in the federal government's AI applications, both do so by requiring development processes that balance similar, competing interests (i.e., the principles): accuracy, safety, transparency, accountability, and thorough review, among others. This balancing is done on a case-by-case basis.

The principles in the Order (and the OMB Memo) therefore provide a checklist against which AI designers and developers can measure their application development processes. Private companies that provide AI services and applications to the federal government will need to incorporate the principles into their development cycles and be able to demonstrate that they have done so. Those requirements will likely necessitate significant investment in research, development, and marketing to appeal to the federal government as a customer. For companies that market AI to both Washington and the private sector, that investment is likely to influence their development of private-sector AI as well. Depending on how well known the Order's principles become, consumers and business clients may also come to expect AI designers to incorporate the Order's principles into private-sector AI.

Although I respect the need to apply the principles above on a case-by-case basis, I hope that the Biden administration lifts its eyes to see the forest for the trees, moving beyond imposing principles on the development of AI regulations and applications to considering the impact those regulations and applications have in the world. Will Americans experience less discrimination because of AI? Will AI materially improve their economic status or quality of life? Will they feel they have more oversight of the forces in their lives because of AI? These are the big-picture questions that lawmakers and policymakers should be forced to consider when adopting AI regulations and applications.7


* John Frank Weaver, a member of McLane Middleton's privacy and data security practice group, is a member of the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law and writes its "Everything Is Not Terminator" column. Mr. Weaver, who may be contacted at john.weaver@mclane.com, has a diverse technology practice that focuses on information security, data privacy, and emerging technologies, including artificial intelligence, self-driving vehicles, and drones.

Footnotes

1. Exec. Order No. 13,960, 85 Fed. Reg. 78939 (December 3, 2020), available at https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government (the "Order").

2. Id. at Sec. 1.

3. Exec. Order No. 13,859, 84 Fed. Reg. 3967 (February 14, 2019), available at https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence; see John Frank Weaver, "Everything Is Not Terminator: What Does the Executive Order Calling for Artificial Intelligence Standards Mean for AI Regulation?," The Journal of Robotics, Artificial Intelligence & Law (Vol. 2, No. 5; September-October 2019), 373-379.

4. Office of Management and Budget Memorandum, Guidance for Regulation of Artificial Intelligence Applications (November 17, 2020), available at https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-06.pdf ("OMB Memo").

5. Id. at 3-7. The OMB Memo's conflation of regulatory and non-regulatory approaches to governing AI is problematic, as it could limit the federal government's ability to make qualitative decisions about outcomes and benefits concerning AI and slow down needed AI regulation. See John Frank Weaver, "Everything Is Not Terminator: The White House Memo on Regulating AI Addresses Values but Not the Playing Field," The Journal of Robotics, Artificial Intelligence & Law (Vol. 3, No. 3; May-June 2020) (describing how the draft memo predating the OMB Memo overemphasizes growth and innovation at the expense of the government's ability to timely and effectively regulate AI).

6. Order, supra note 1, at Sec. 3.

7. See John Frank Weaver, "Everything Is Not Terminator: Value-Based Regulation of Artificial Intelligence," The Journal of Robotics, Artificial Intelligence & Law (Vol. 2, No. 3; May-June 2019), 219-226 ("We need to regulate AI now in order to set early expectations for AI developers: what should consumers reasonably expect, what processing behavior is acceptable, what information must be disclosed, etc.").

Published in The Journal of Robotics, Artificial Intelligence & Law (May-June 2021)
