On January 26, 2023, the U.S. Department of Commerce's National Institute of Standards and Technology ("NIST") released its highly anticipated AI Risk Management Framework 1.0 ("AI RMF"), a significant resource for organizations to use when designing, developing, deploying, or using artificial intelligence ("AI") systems. With the Government of Canada's proposed Artificial Intelligence and Data Act ("AIDA") in Bill C-27 working its way through Parliament (as addressed in our previous bulletin), NIST's AI RMF is poised to become an influential tool for assessing, managing, and mitigating risks as organizations look to adopt AI systems in a responsible manner.

The Structure of the AI RMF

The AI RMF is divided into two parts:

  • Part 1 ("Foundational Information") sets out AI-related risks, impacts, and harms, and outlines the seven characteristics of trustworthy AI systems: (i) valid and reliable; (ii) safe; (iii) secure and resilient; (iv) accountable and transparent; (v) explainable and interpretable; (vi) privacy-enhanced; and (vii) fair – with harmful bias managed.
  • Part 2 ("Core and Profiles") describes the AI RMF's "Core": four functions (Govern, Map, Measure, Manage), each broken down into categories that assist organizations in addressing AI system risks in practice.

Evaluating AI Risk and Trustworthiness (Part 1 – Framing Risk)

Part 1 identifies the AI RMF's intended audience as "AI actors . . . who perform or manage the design, development, deployment, evaluation, and acquisition of AI systems and drive AI risk management efforts." The AI RMF defines "risk" to mean "the composite measure of an event's probability of occurring and the magnitude or degree of the consequences of the corresponding event." NIST describes enhanced AI system "trustworthiness" as the remedy for negative AI risks.
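
To make this definition concrete, the sketch below scores a risk as the product of an event's probability and the magnitude of its consequences. Note that NIST does not prescribe any formula or scoring scale; the multiplication, the 1-to-5 scales, and the AIRisk class are illustrative assumptions only.

    from dataclasses import dataclass

    @dataclass
    class AIRisk:
        description: str
        probability: int  # hypothetical scale: 1 (rare) to 5 (almost certain)
        magnitude: int    # hypothetical scale: 1 (negligible) to 5 (severe)

        @property
        def score(self) -> int:
            # Composite measure: the event's probability combined with the
            # magnitude of its consequences, here (by assumption) multiplied.
            return self.probability * self.magnitude

    risk = AIRisk("Model produces biased lending recommendations",
                  probability=3, magnitude=4)
    print(f"{risk.description}: score {risk.score} out of 25")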

Part 1 sets out the seven characteristics that may be used to evaluate an AI system's trustworthiness (a brief, hypothetical checklist sketch follows the list):

  1. Valid and Reliable: "Validation is the confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled... Reliability [is the] ability of an item to perform as required, without failure, for a given time interval, under given conditions."
  2. Safe: Safe AI systems should "not under defined conditions, lead to a state in which human life, health, property, or the environment is endangered... Different types of safety risks may require tailored AI risk management approaches based on context and the severity of potential risks presented. Safety risks that pose a potential risk of serious injury or death call for the most urgent prioritization and most thorough risk management process."
  3. Secure and Resilient: AI systems are resilient if they can "withstand unexpected adverse events or unexpected changes in their environment or use – or if they can maintain their functions and structure in the face of internal and external change and degrade safely and gracefully when this is necessary... Security and resilience are related but distinct characteristics. While resilience is the ability to return to normal function after an unexpected adverse event, security includes resilience but also encompasses protocols to avoid, protect against, respond to, or recover from attacks."
  4. Accountable and Transparent: "Trustworthy AI depends upon accountability, [which] presupposes transparency. Transparency reflects the extent to which information about an AI system and its outputs is available to individuals interacting with such a system – regardless of whether they are even aware that they are doing so... By promoting higher levels of understanding, transparency increases confidence in the AI system."
  5. Explainable and Interpretable: "Explainability refers to a representation of the mechanisms underlying AI systems' operation, whereas interpretability refers to the meaning of AI systems' output in the context of their designed functional purposes."
  6. Privacy-Enhanced: "Privacy refers generally to the norms and practices that help to safeguard human autonomy, identity, and dignity. These norms and practices typically address freedom from intrusion, limiting observation, or individuals' agency to consent to disclosure or control of facets of their identities (e.g., body, data, reputation)... Privacy values such as anonymity, confidentiality, and control generally should guide choices for AI system design, development, and deployment."
  7. Fair – with Harmful Bias Managed: "Fairness in AI includes concerns for equality and equity by addressing issues such as harmful bias and discrimination.... While bias is not always a negative phenomenon, AI systems can potentially increase the speed and scale of biases and perpetuate and amplify harms to individuals, groups, communities, organizations, and society."
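
As a purely illustrative aid, the following sketch shows how an organization might record a first-pass review against these seven characteristics. The AI RMF does not define this checklist structure or the function used here; both are hypothetical.

    # Hypothetical checklist pairing each trustworthy AI characteristic
    # with a yes/no assessment recorded during an internal review.
    TRUSTWORTHY_CHARACTERISTICS = [
        "Valid and Reliable",
        "Safe",
        "Secure and Resilient",
        "Accountable and Transparent",
        "Explainable and Interpretable",
        "Privacy-Enhanced",
        "Fair - with Harmful Bias Managed",
    ]

    def review_system(assessments: dict[str, bool]) -> list[str]:
        """Return the characteristics that still need attention."""
        return [c for c in TRUSTWORTHY_CHARACTERISTICS
                if not assessments.get(c, False)]

    # Example: a review that has so far confirmed only two characteristics.
    gaps = review_system({"Safe": True, "Privacy-Enhanced": True})
    print("Characteristics needing attention:", gaps)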

Managing AI Risk (Part 2 – Core and Profiles)

Part 2 presents a vision for how the AI RMF "Core" can assist organizations in responsibly managing AI risks in practice. This Core contains four distinct "Functions" (a hypothetical sketch of how they might shape a simple risk register follows the list):

  • Govern: Governance is described as a cross-cutting Function that is infused throughout AI risk management. Like in any organization, strong and effective governance "can drive and enhance internal practices and norms to facilitate organizational risk culture. Governing authorities can determine the overarching policies that direct an organization's mission, goals, values, culture, and risk tolerance."
  • Map: Mapping provides a method for contextualizing and identifying AI system risks, allowing organizations to track interdependencies between activities and among the relevant AI actors in order to reliably anticipate the impacts of AI systems.
  • Measure: Measuring allows organizations to track and analyze identified AI risks and evaluate system trustworthiness using both qualitative and quantitative indicators.
  • Manage: Managing risk flows from the other Functions and entails allocating resources to mapped and measured risks on a regular basis and as defined by the Govern function.
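
As one hypothetical illustration of how the four Functions could be operationalized, the sketch below structures a simple risk register entry around them. The AI RMF describes the Functions at the level of organizational practice and does not prescribe this (or any) data structure; every field name here is an assumption for illustration only.

    from dataclasses import dataclass, field

    @dataclass
    class RiskRegisterEntry:
        # Govern: the policy or risk tolerance the entry is assessed against.
        governing_policy: str
        # Map: the context in which the risk was identified.
        context: str
        # Measure: a qualitative or quantitative indicator of the risk.
        indicator: str
        # Manage: resources or actions allocated to treat the risk.
        actions: list[str] = field(default_factory=list)

    entry = RiskRegisterEntry(
        governing_policy="Corporate AI fairness policy (hypothetical)",
        context="Chatbot deployed for customer-service intake",
        indicator="Disparity in response quality across user groups",
        actions=["Retrain on rebalanced data",
                 "Add human review of flagged cases"],
    )
    print(len(entry.actions), "management actions recorded")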

Implications for AIDA and Canadian Regulation of AI

As a direct product of the U.S. National Artificial Intelligence Initiative Act of 2020, the AI RMF represents an important step in the promotion of U.S. AI strategy and leadership, with broader implications for other jurisdictions exploring AI governance. As discussed in our previous bulletin, the European Union's (EU) Artificial Intelligence Act would comprehensively regulate the development, marketing, and use of AI, with global implications for the AI industry if passed. For Canada, AIDA is similarly not yet law and may undergo changes through the legislative process, but its current proposed standards generally reflect those in the AI RMF and emerging AI regulatory frameworks being considered in major economies around the world such as the EU.

The AI RMF offers guidance that may influence the development of AI regulatory frameworks and may also assist designers, developers, and users of AI systems with operationalizing risk mitigation in advance of potentially becoming subject to AI-specific legislation. Organizations involved with AI systems should review the AI RMF to assess its applicability and utility in light of the proposed AI laws emerging in Canada and around the world.

Although the AI RMF is a voluntary framework, NIST is generally considered to be an influential organization in the global development of technology standards, and so it may become widely referenced in the course of the design, development, and deployment of AI systems. For example, many public sector entities in Canada (particularly federal government entities) require their contractors and subcontractors to be NIST-compliant; organizations involved with AI systems that contract or subcontract with these entities must take note of the AI RMF.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.