Key takeaways

  • The National Institute of Standards and Technology published the Artificial Intelligence Risk Management Framework on January 26, 2023
  • The framework is intended to provide risk management guidelines that help organizations analyze the unique risks posed by artificial intelligence, including potential negative impacts
  • The voluntary guidelines encourage organizations to bolster the trustworthiness of AI and present a four-step action plan for doing so: govern, map, measure and manage

On January 26, 2023, the US Commerce Department's National Institute of Standards and Technology (NIST) published the Artificial Intelligence Risk Management Framework (AI RMF). The AI RMF is a voluntary resource designed to help a variety of actors in the artificial intelligence sphere, such as technology companies developing AI and life sciences entities deploying it, identify and manage the risks of AI while promoting "trustworthy and responsible development and use of AI systems." The guidance is general enough for any organization to adopt. The AI RMF arrives at a pivotal point for AI and asserts that bolstering trustworthiness and minimizing risk are key to developing AI without inequitable outcomes.

The AI RMF arises from the National Artificial Intelligence Initiative Act of 2020, in which Congress first directed NIST to develop the framework. The framework follows 18 months of development, reflecting input from more than 240 organizations across 400 formal comments on draft versions.

The framework

The AI RMF is divided into two parts, each addressing high-level risk management principles. The framework is broad, applying across sectors and across points in the AI life cycle. At the same time, it is rights-protective, encouraging organizations to examine the impact of AI on society. Part 1 sets the stage for the overall framework, defining AI trustworthiness and framing AI risk as comprehensive and multidimensional. Part 2 sets out the framework's four core pillars: govern, map, measure and manage.

Part 1: Foundational information

Part 1 asks AI actors to focus on minimizing negative impacts and bolstering positive effects. It highlights that:

  • Harm can impact individuals, organizations and ecosystems.
  • AI has several sources of risk, including a lack of reliable metrics, lack of transparency, third-party involvement and issues applying AI to human characteristics.
  • Risk tolerance and prioritization often differ across perspectives, reducing uniformity and creating potential issues throughout the AI life cycle.
  • AI risks must be considered alongside other contexts rather than in isolation. Some areas of considerable overlap are cybersecurity, privacy and environmental impact.

The hallmark of Part 1 is the definition of AI trustworthiness. The AI RMF encourages AI actors to examine several key characteristics of trustworthy AI. The framework views an AI system as trustworthy when it is:

  • Valid and reliable.
  • Safe.
  • Secure and resilient.
  • Accountable and transparent.
  • Explainable and interpretable.
  • Privacy-enhanced.
  • Fair with harmful bias managed.

These characteristics of trustworthy AI systems are not all equally weighted, and some are in tension with others (e.g., predictive accuracy and interpretability). According to NIST, "Creating trustworthy AI requires balancing each of these characteristics based on the AI system's context of use."

Part 2: Core pillars and profiles

Part 2 details the four core pillars of the AI RMF: govern, map, measure and manage. The four pillars are designed to work together to create an overall plan for managing AI risk.

  • Govern addresses the need for clear processes around AI risk management. At this stage, AI actors are tasked with creating procedures and policies.
  • Map is the stage where AI actors gather information to understand the context of the AI, the circumstances of its use and its possible impact. The focus of mapping is interdisciplinary and expansive. For example, actors may work to understand the limitations of AI or test assumptions about its use.
  • Measure encompasses the variety of methods used to analyze and monitor AI risks and related impacts. This includes "tracking metrics for trustworthy characteristics, social impact, and human-AI configurations." Measurement also supports the other pillars by documenting how decisions are informed and in what context.
  • Manage is the final core pillar and involves responding directly to the risks identified throughout the process; a simplified sketch of how the four pillars can fit together follows this list.
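
To make the interplay of the four pillars more concrete, what follows is a minimal, purely illustrative Python sketch of an AI risk register: a governed risk tolerance, mapped risk entries tied to the Part 1 trustworthiness characteristics, measured risk scores and a management step that flags risks exceeding tolerance. The class names, the 0-to-1 scoring scale and the tolerance threshold are assumptions made for illustration; neither the AI RMF nor its playbook prescribes any particular implementation.

    # Hypothetical sketch only: names and thresholds are illustrative, not NIST's.
    from dataclasses import dataclass, field

    # The seven trustworthiness characteristics named in Part 1 of the AI RMF.
    CHARACTERISTICS = {
        "valid and reliable",
        "safe",
        "secure and resilient",
        "accountable and transparent",
        "explainable and interpretable",
        "privacy-enhanced",
        "fair with harmful bias managed",
    }

    @dataclass
    class RiskEntry:
        """One mapped risk: its context (map) and a measured score (measure)."""
        description: str     # map: the context and potential impact of the risk
        characteristic: str  # the trustworthiness characteristic it affects
        score: float         # measure: illustrative risk score on a 0-1 scale

    @dataclass
    class RiskRegister:
        """Govern: a documented place where risks are recorded and reviewed."""
        tolerance: float = 0.5  # govern: illustrative organizational risk tolerance
        entries: list[RiskEntry] = field(default_factory=list)

        def map_risk(self, entry: RiskEntry) -> None:
            """Map: record a risk tied to a recognized characteristic."""
            if entry.characteristic not in CHARACTERISTICS:
                raise ValueError(f"unknown characteristic: {entry.characteristic}")
            self.entries.append(entry)

        def manage(self) -> list[RiskEntry]:
            """Manage: surface measured risks that exceed the governed tolerance."""
            return [e for e in self.entries if e.score > self.tolerance]

    # Usage: record a mapped privacy risk and flag it for mitigation.
    register = RiskRegister(tolerance=0.4)
    register.map_risk(RiskEntry(
        description="Model may memorize and reveal training records",
        characteristic="privacy-enhanced",
        score=0.7,
    ))
    for entry in register.manage():
        print(f"Needs mitigation: {entry.description} (score={entry.score})")

In practice, the categories, scoring and response workflow would be tailored to an organization's own context; the point of the sketch is simply that the four pillars can operate as an iterative loop rather than a one-time checklist.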

Finally, the AI RMF highlights the utility of profiles, which enable different types of organizations to navigate similar contexts more easily. For instance, a hiring profile would provide more nuanced information on the legal and societal implications of AI in the hiring context. Although profiles are not included in the framework itself, the concept offers additional insight into how actors can analyze AI risk.

Looking forward

The AI RMF is another brick in the road to AI regulation in the US. While the European Union is much further down that regulatory road, with draft legislation in the form of the Artificial Intelligence Act, the increasingly widespread adoption of AI in the US will continue to attract the attention of Congress and federal agencies such as the Commerce Department.

NIST has committed to developing additional resources as companions to the AI RMF. The most important supplementary resource, the AI RMF Playbook, was published with the framework and gives detailed ideas on how to implement the high-level principles set forth in the RMF. The playbook follows the four core pillars outlined in the RMF and expands on the specific principles highlighted throughout. Each entry includes a rationale for the principle, suggested actions, ideas for transparency and documentation, and further resources. For example, the AI RMF outlines the Govern 1.2 principle as, "The characteristics of trustworthy AI are integrated into organizational policies, processes, and procedures." The playbook expands on this concept and suggests defining terms and testing incident response plans.

Like the RMF, the playbook is only in its first version. NIST plans to incorporate comments on the playbook into an updated version slated for spring 2023. NIST also plans to launch the Trustworthy and Responsible AI Resource Center to further assist organizations in implementing the AI RMF.

Companies seeking to be at the forefront of technology can look to the AI RMF as a guide to building and deploying trustworthy AI. While the possibilities of AI remain uncertain, there is no doubt that more guidance and regulation are yet to come.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.