In this article, the authors explain the five ethical principles set out in the White House Office of Science and Technology Policy's Blueprint for an AI Bill of Rights, and how entities utilizing AI systems can align their practices with those principles.

Rapid advances in artificial intelligence (AI) have brought AI not just into the headlines, but into our everyday lives, which in turn has heightened governmental interest in the technology and its implications. In October 2022, the White House Office of Science and Technology Policy (OSTP), with the support of President Biden, published "The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People" (the Blueprint), a framework for enacting AI ethics legislation.1 While the Blueprint itself is not law, it describes shortcomings of some modern AI systems, particularly in transparency and equality, and proposes five principles to serve as focal points for potential corrective legislation.

The Blueprint is accompanied by a "Technical Companion," which offers practical steps for communities and industries to actualize and ensure consistency with these principles.2 Together, the Blueprint and Technical Companion provide ethical principles that "allow technical innovation to flourish while protecting people from harm."3 Entities currently utilizing AI systems will benefit greatly from familiarizing themselves with the principles presented in the Blueprint and considering how best to incorporate them into their existing AI systems, as these principles may be embodied in future legislation.

The Blueprint's Five Principles

The Blueprint focuses on the potential impacts of AI systems on the rights of the American public. At the heart of the Blueprint rests the concern that AI, if left unregulated, could be "used to limit [the public's] opportunities and prevent . . . access to critical resources and services."4 The Blueprint and Technical Companion focus specifically on the use of AI in certain areas, namely health care, job recruitment, criminal justice, banking and lending, education, and social media. These areas were chosen out of concern that the risk of perpetuating existing inequities or mishandling sensitive data may be higher in these areas than in others.

The Blueprint provides five principles to guide the design and deployment of AI in ways that create transparent systems and prevent discrimination. The principles are primarily intended to provide guidance for those AI systems that "have the potential to meaningfully impact the American public's rights, opportunities or access to critical resources or services."5 For each principle, the Technical Companion provides suggestions on how the principle can be put into practice by entities utilizing AI systems. The principles, as well as the key concepts for implementation, are briefly described below.

Safe and Effective Systems

This principle is intended to protect the public by encouraging development teams to design AI systems that guard against unintended uses, including the use of inappropriate or irrelevant data. The Blueprint encourages appropriate testing to ensure AI systems use only data that is appropriate and relevant to their development and deployment, as well as independent evaluations to assess the safety of these systems.
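By way of illustration only, a pre-deployment data check of the kind the Technical Companion contemplates might look like the following minimal sketch. The field names, the allowlist, and the severity labels are hypothetical and not drawn from the Blueprint:

```python
# Minimal sketch of a pre-deployment data audit: verify that a dataset
# contains only fields approved as relevant for the system's purpose.
# The field names and allowlist below are hypothetical illustrations.

APPROVED_FIELDS = {"years_experience", "certifications", "skills"}
PROHIBITED_FIELDS = {"zip_code", "marital_status"}  # irrelevant or proxy data

def audit_fields(dataset_fields: set[str]) -> list[str]:
    """Return a list of findings; an empty list means the check passed."""
    findings = []
    for field in dataset_fields - APPROVED_FIELDS:
        severity = "PROHIBITED" if field in PROHIBITED_FIELDS else "UNAPPROVED"
        findings.append(f"{severity}: '{field}' is not approved for this use")
    return findings

if __name__ == "__main__":
    fields = {"years_experience", "skills", "zip_code"}
    for finding in audit_fields(fields):
        print(finding)  # e.g., PROHIBITED: 'zip_code' is not approved for this use
```

In practice, a check like this would be wired into the release process so that a failed audit blocks deployment rather than merely printing a warning.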

To an extent, this principle has already been enacted through the issuance of Executive Order 13960: Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, which sets out principles governing the federal government's use of AI systems and requires federal agencies utilizing AI systems to adhere to principles similar to those in the Blueprint.6

Algorithmic Discrimination Protections

Eric Lander, then director of the OSTP, and Alondra Nelson, then deputy director of the OSTP, believe that some unintentional discriminatory effects of AI systems, including the penalization of resume terms associated with women and minority groups, result from developers "not using appropriate data sets and not auditing systems comprehensively."7 A central goal of the Blueprint is to prevent discriminatory AI systems from being developed and deployed; to achieve this, the Blueprint endorses proactive measures such as equity assessments, use of representative data, and organizational oversight. The Blueprint also encourages independent evaluations and plain-language reporting in the form of publicly conducted algorithmic impact assessments whenever possible.
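As a minimal sketch of one thing an equity assessment might compute, the following compares selection rates across demographic groups using the "four-fifths" heuristic, a benchmark borrowed from employment-discrimination practice. The Blueprint does not prescribe this or any other specific metric, and the data and threshold here are hypothetical:

```python
# Minimal sketch of a selection-rate equity check using the
# "four-fifths" heuristic. The audit data below is hypothetical.

from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group, selected) pairs; returns the selection rate per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(rates: dict[str, float], threshold: float = 0.8):
    """Flag groups whose selection rate falls below threshold * the best rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical audit data: (group label, whether the system selected the person)
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
rates = selection_rates(outcomes)
print(rates)                          # {'A': 0.667, 'B': 0.333} (approx.)
print(disparate_impact_flags(rates))  # {'B': 0.5} -> below the 0.8 threshold
```

A flagged group is a prompt for further investigation and auditing, not proof of discrimination on its own.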

Data Privacy

The Blueprint encourages stricter data privacy practices, suggesting that only data necessary for a specific context be collected and that people have access to reporting confirming that their data decisions have been respected by collecting entities. The Blueprint also suggests implementing consent withdrawal and data deletion policies.
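A minimal sketch of how consent withdrawal and deletion policies might be enforced in code appears below. The in-memory storage and single-purpose consent model are simplifying assumptions; a production system would need durable storage, authentication, and audit logging:

```python
# Minimal sketch of a consent registry supporting withdrawal and
# deletion. Identifiers and storage here are hypothetical.

from datetime import datetime, timezone

class ConsentRegistry:
    def __init__(self):
        self._records: dict[str, dict] = {}  # user_id -> consent record

    def grant(self, user_id: str, purpose: str) -> None:
        """Record consent for a single, narrowly defined purpose."""
        self._records[user_id] = {
            "purpose": purpose,
            "granted_at": datetime.now(timezone.utc),
            "withdrawn": False,
        }

    def withdraw(self, user_id: str) -> None:
        """Honor a consent withdrawal; downstream use must stop."""
        if user_id in self._records:
            self._records[user_id]["withdrawn"] = True

    def delete(self, user_id: str) -> None:
        """Honor a deletion request by removing the record entirely."""
        self._records.pop(user_id, None)

    def may_use(self, user_id: str, purpose: str) -> bool:
        """Data may be used only with active consent for this exact purpose."""
        rec = self._records.get(user_id)
        return bool(rec) and not rec["withdrawn"] and rec["purpose"] == purpose
```

Before any processing run, the system would call may_use for each record and skip anyone whose consent is absent, withdrawn, or given for a different purpose.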

Additionally, the Blueprint suggests extra protections for sensitive data and for data used in particular areas, including health, employment, education, criminal justice, and personal finance, while noting that what the public considers sensitive changes over time. These protections include assurances that the information is sufficiently protected and used only in narrowly defined contexts. In determining what data is sensitive and thus in need of additional protections, the Blueprint notes that data and metadata are sensitive if they pertain to certain personal information about individuals (e.g., biometric data, geolocation data, Social Security numbers) or if they are used in a "sensitive domain," in which their use could materially harm an individual or affect the individual's civil liberties.8 For this sensitive data, the Blueprint recommends that the data be used only for necessary functions, be subject to ethical review and data quality checks, and that entities regularly publish reports describing oversight procedures and any security lapses.

Notice and Explanation

To promote transparency, the Blueprint encourages developers to provide reports containing "generally accessible plain language documentation," including notice that AI systems are in use, descriptions of the systems' functions, and explanations of how they contribute to outcomes that may impact the public.9 Such notice and reporting is intended to guard against potential harms by providing individuals with the opportunity to find and correct errors and contest decisions made by AI systems. Several large companies have already integrated certain explainability tools into some of their AI systems.10

The Blueprint highlights the value of integrating existing notice and explainability processes into new AI systems. For example, lenders are required by federal law to notify consumers when they are denied credit based on their credit report and to explain how their information was used to reach the denial. Because these existing notice and explanation processes benefit the public, the Blueprint suggests extending them to AI systems created for the credit review process, requiring those systems to provide the same notice and explanation.11
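To make the idea concrete, here is a minimal sketch of plain-language notice generation for an automated credit decision. The decision factors and wording are hypothetical, and nothing in the Blueprint mandates this particular format:

```python
# Minimal sketch of a plain-language notice for an automated decision:
# state that an automated system was used, what it decided, and why.
# The decision and factors below are hypothetical illustrations.

def build_notice(decision: str, factors: list[tuple[str, str]]) -> str:
    lines = [
        "An automated system was used in reaching this decision.",
        f"Decision: {decision}",
        "The factors that most influenced this outcome were:",
    ]
    lines += [f"  - {name}: {effect}" for name, effect in factors]
    lines.append("You may contest this decision or request human review.")
    return "\n".join(lines)

print(build_notice(
    "Credit application denied",
    [("payment history", "two late payments in the last 12 months"),
     ("credit utilization", "balances above 90% of available credit")],
))
```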

Human Alternatives, Consideration, and Fallback

The Blueprint suggests that individuals should be able to opt out of AI systems and have access to a human alternative, both to protect against flaws in AI systems that can produce unintended outcomes and to quickly mitigate any harm that does occur. The ability to opt for a human alternative should be accessible, equitable, and effective, but limited to appropriate situations, as determined by reasonable expectations in the given context.

The Blueprint highlights the need for human fallback in areas such as criminal justice, education, employment, and health care, because preventing and mitigating potentially harmful outcomes in these areas is especially important. In the health care space, the Biden administration recently increased funding for "Navigators," that is, human professionals who help individuals look for public health care options and complete eligibility and enrollment forms.12 Additionally, many businesses already use a variation of human fallback in the form of AI customer service systems that include the opportunity to escalate service requests to a human representative.
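A minimal sketch of the escalation pattern such customer service systems commonly use appears below. The confidence threshold and routing logic are hypothetical simplifications, not a description of any particular company's system:

```python
# Minimal sketch of a human-fallback pattern: route to a human
# representative when the automated system is uncertain or when the
# user opts out. The threshold and data model are hypothetical.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # below this, a human should handle the request

@dataclass
class AutomatedReply:
    text: str
    confidence: float  # the system's self-reported confidence, 0.0 to 1.0

def route_request(reply: AutomatedReply, user_opted_out: bool) -> str:
    """Return who should handle the request: the AI system or a human."""
    if user_opted_out:
        return "human"  # an opt-out must always be honored
    if reply.confidence < CONFIDENCE_THRESHOLD:
        return "human"  # timely fallback when the system is unsure
    return "ai"

print(route_request(AutomatedReply("Your order shipped Tuesday.", 0.95), False))  # ai
print(route_request(AutomatedReply("I think you want...", 0.40), False))          # human
print(route_request(AutomatedReply("Anything else?", 0.99), True))                # human
```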

The Blueprint and the Artificial Intelligence Risk Management Framework

Members of Congress have requested further clarification of the Blueprint in light of conflicting information between it and another recently published AI guidance document. In 2020, Congress directed the National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce founded in 1901, to create the Artificial Intelligence Risk Management Framework (the Framework), a guidance and best practices document intended to promote trustworthy AI design and implementation strategies, similar to the Blueprint.13 On January 19, 2023, Congressmen Frank Lucas, Chairman of the Committee on Science, Space, and Technology, and James Comer, Chairman of the Committee on Oversight and Accountability, sent a letter to the OSTP seeking clarification regarding guidance in the Blueprint that conflicts with the Framework.14 For example, the letter states that the Blueprint "adopts a different definition of AI" and "different principles on trustworthy AI."15

To date, the OSTP has yet to respond to the letter from Congressmen Lucas and Comer. A response would provide further interpretation of existing gray areas for state and federal lawmakers and give entities utilizing AI systems clearer instructions on how to update their systems, if needed. For now, entities using AI systems should be aware that there are discrepancies between the Blueprint and the Framework that will likely be resolved in the coming months, though the documents generally share a similar emphasis on creating transparent, secure, and non-biased AI systems.

Implementation Strategies

The principles in the Blueprint are broad, as they are meant to provide basic guidelines should Congress seek to codify some form of them in future legislation. The Blueprint provides specific examples of how the principles can be used in practice, focusing on the areas listed above, but acknowledges that applying the principles may not be appropriate in all circumstances; accordingly, the measures companies take to comply with the Blueprint's principles should be proportionate to the extent and nature of the potential harm.16 As the government's interest in the ethical use of AI systems increases, AI management companies have emerged to assist entities utilizing AI systems with risk management, system monitoring, and transparency.17 These risk management and monitoring ventures reflect companies' growing focus on ethical AI practices and on embodying the principles in the Blueprint. Adherence to those principles not only helps insulate entities using AI systems from risk, but can also provide value to customers and aid with satisfaction and retention.18

The Blueprint signals this administration's desire for proactive ethical AI protections to be implemented by organizations, as well as the creation of subsequent legislation codifying these principles. The Blueprint is not legally binding; rather, its mission is to "support the development of policies and practices that protect civil rights and promote democratic values."19

Conclusion

The Blueprint is the product of a yearlong process of seeking input from impacted stakeholders, including technology developers, experts in the areas specified above, and federal policymakers, through panel discussions, review of email submissions, public listening sessions, and private meetings. The OSTP notes that this public input "played a central role in shaping the Blueprint,"20 so organizations would be wise to reflect internally on how their AI systems align with the principles set forth in the Blueprint and, if necessary, implement changes to create that alignment.

Footnotes

1. Off. of Sci. and Tech. Pol'y, Exec. Off. of the President, The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People (Oct. 2022), https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.

2. Id. at 3.

3. Id. at 9.

4. Id. at 3.

5. Id. at 8.

6. Exec. Order No. 13,960, 85 Fed. Reg. 78,939 (2020).

7. Eric Lander & Alondra Nelson, Editorial, Americans Need a Bill of Rights for an AI-Powered World, Wired (Oct. 8, 2021, 8:00 am), https://www.wired.com/story/opinion-bill-of-rights-artificial-intelligence/; supra note 1 (describing an automated hiring tool that "learned the features of a company's employees (predominantly men)" and "rejected woman applicants for spurious and discriminatory reasons," including penalizing resumes with the word "women's," such as "women's chess club captain," in the candidate ranking).

8. Supra note 1.

9. Id. at 43.

10. See Linh Ho, Responsible AI Comes of Age (And Customers Love It), Forbes (Feb. 21, 2023, 7:30 am), https://www.forbes.com/sites/forbescommunicationscouncil/2023/02/21/responsible-ai-comes-of-age-and-customers-love-it/ ("Microsoft and Amazon have launched explainability tools . . . , and Google, IBM and others now offer explainability tools and technologies that help [people] use AI data fairly and responsibly").

11. Supra note 1.

12. Id. at 52.

13. See Nat'l Inst. of Standards and Tech., U.S. Dep't of Com., Artificial Intelligence Risk Management Framework (AI RMF 1.0) (Jan. 2023), https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.

14. See Letter from Frank D. Lucas, Chair, Com. of Sci., Space and Tech., and James Comer, Chair, Com. on Oversight and Accountability, to Arati Prabhakar, Director, White House Off. of Sci. and Tech. Pol'y (Jan. 19, 2023), https://republicans-science.house.gov/_cache/files/7/1/71fd9ec7-1450-4290-b2ea-f1bd4d74a6c2/550CFF8A4020B647043679FDF9D41CB9.2023-01-19-ostp-ai-bill-of-rights-letter.pdf.

15. Id.

16. See supra note 1, at 8.

17. See, e.g., Zach Winn, Helping companies deploy AI models more responsibly, MIT News (Feb. 10, 2023), https://news.mit.edu/2023/verta-helping-companies-deploy-ai-models-0210 (showcasing Verta, an example of a company designed to monitor and manage AI systems, created by MIT's Computer Science and Artificial Intelligence Laboratory); cf. Ho, supra note 10 (explaining that a recent study found that 84% of insurtech customers were willing to pay more for companies using "responsible AI").

18. Ho, supra note 10.

19. Supra note 1.

20. Id. at 4.

Originally published by The Journal of Robotics, Artificial Intelligence and Law.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.