
NIST AI RMF

The NIST AI Risk Management Framework 1.0 (NIST AI RMF) provides a voluntary framework for managing AI risks across four core functions: GOVERN, MAP, MEASURE, and MANAGE.

Rulestatus encodes documentation and evidence requirements from all four functions. Checks apply to actor: provider regardless of EU AI Act risk level.

Total assertions: 18
Critical: 10
Major: 6
Minor: 2

AI risk management policy is documented

Severity: CRITICAL
Applies to: actor: provider

GOVERN 1.1: Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are established, transparent, and implemented effectively.

How to fix: Create docs/ai-rmf/ai-risk-policy.yaml with: scope, governance_structure, risk_management_commitment, approved_by.
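A minimal sketch of such a file, using the four required fields; all values and nested field names are illustrative placeholders:

```yaml
# docs/ai-rmf/ai-risk-policy.yaml
scope: "All ML models and AI-assisted features developed or deployed by the organization"
governance_structure:
  oversight_body: "AI Risk Committee"   # illustrative name
  review_cadence: quarterly
risk_management_commitment: >
  Leadership commits to identifying, measuring, and managing AI risks
  throughout the system lifecycle, consistent with NIST AI RMF 1.0.
approved_by:
  role: "Chief Technology Officer"      # accountable executive
  date: 2024-01-15
```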


Organizational AI risk tolerance is defined

Severity: CRITICAL
Applies to: actor: provider

GOVERN 1.5: Organizational risk tolerances are established, communicated, and maintained. Teams understand the organization’s risk tolerance for AI systems.

How to fix: Add a risk_tolerance field to your AI risk policy document defining acceptable risk thresholds.


Roles and responsibilities for AI risk management are assigned

Severity: CRITICAL
Applies to: actor: provider

GOVERN 2.1: Roles and responsibilities and organizational accountability for teams that design, develop, deploy, evaluate, and monitor AI systems are documented.

How to fix: Create docs/ai-rmf/ai-roles.yaml defining: roles, responsibilities, and accountable individuals for AI risk management.
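A minimal sketch of such a roles file; role titles and individuals shown are illustrative:

```yaml
# docs/ai-rmf/ai-roles.yaml
roles:
  - role: "AI Risk Owner"
    responsibilities:
      - "Maintain the AI risk register"
      - "Approve risk treatment decisions"
    accountable_individual: "Head of Engineering"   # illustrative
  - role: "Model Developer"
    responsibilities:
      - "Document model capabilities and limitations"
      - "Run pre-deployment evaluations"
    accountable_individual: "ML Team Lead"          # illustrative
```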


AI risk and benefits are communicated to relevant stakeholders

Severity: MAJOR
Applies to: actor: provider

GOVERN 4.1: Organizational teams are committed to a culture that considers and communicates AI risk and its potential impact on people. AI risk information is regularly communicated to relevant stakeholders.

How to fix: Document your AI risk communication plan: who is informed, how often, and through which channels.
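One way such a plan could look as a section of the AI risk policy; the `communication_plan` structure and stakeholder groups are illustrative assumptions, not a required schema:

```yaml
# e.g. a communication_plan section in the AI risk policy (illustrative structure)
communication_plan:
  stakeholders:
    - group: "Executive leadership"
      frequency: quarterly
      channel: "risk review meeting"
    - group: "Affected users"
      frequency: "on material change"
      channel: "release notes and model card updates"
```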


Third-party AI risk management policies are established

Severity: MINOR
Applies to: actor: provider

GOVERN 6.1: Policies and procedures are in place to assess the provenance of AI models, data, and third-party components used by the organization.

How to fix: Add a third_party_policy or supply_chain_risk field to your AI risk policy or create a separate vendor risk document.


AI system context, intended use, and deployment environment are documented

Severity: CRITICAL
Applies to: actor: provider

MAP 1.1: Context is established for the AI risk assessment. Factors affecting the intended use and deployment context of the AI system are documented.

How to fix: Create docs/ai-rmf/system-context.yaml with: intended_use, deployment_context, affected_populations, known_use_cases.
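A minimal sketch of such a context file, covering the four listed fields; the example system and its details are illustrative:

```yaml
# docs/ai-rmf/system-context.yaml
intended_use: "Rank support tickets by urgency for human triage"   # illustrative
deployment_context:
  environment: production
  integration: "internal support dashboard"
  human_oversight: "agents review every ranking before action"
affected_populations:
  - "Customers submitting support requests"
  - "Support agents using the ranking"
known_use_cases:
  - "Ticket prioritization"
```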


External factors and dependencies affecting AI risk are documented

Severity: MAJOR
Applies to: actor: provider

MAP 2.2: Scientific findings, context, and external factors that may affect AI risk and the organization’s ability to manage it are documented.

How to fix: Add external_factors or dependencies to your system context or risk assessment document.


AI system tasks, capabilities, and limitations are documented

Severity: CRITICAL
Applies to: actor: provider

MAP 3.1: AI tasks and the capabilities and limitations of AI systems are documented. Known limitations are communicated to relevant AI actors.

How to fix: Add capabilities and limitations fields to your technical documentation or system context document.
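A sketch of the two fields added to a system context or technical document; the entries (including the accuracy figure) are illustrative:

```yaml
# added to the system context or technical documentation
capabilities:
  - "Classifies tickets into four urgency tiers (~92% validation accuracy)"  # illustrative figure
limitations:
  - "Not evaluated on non-English tickets"
  - "Accuracy degrades on tickets shorter than ten words"
```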


Likelihood and impact of AI risks are assessed

Severity: MAJOR
Applies to: actor: provider

MAP 5.1: Likelihood and impact of AI risks are assessed for each identified risk. Risk levels reflect the combination of likelihood and impact.

How to fix: Ensure each risk entry in your risk assessment includes likelihood, impact, and risk_level fields.
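A risk entry with the three required fields might look like this; the risk itself and the low/medium/high scale are illustrative:

```yaml
# one entry in the risk assessment (example risk; scale is an assumption)
risks:
  - id: R-001
    description: "Biased urgency scores for non-native speakers"
    likelihood: medium     # e.g. low / medium / high
    impact: high
    risk_level: high       # reflects the combination of likelihood and impact
```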


Criteria for evaluating AI risk are defined

Severity: CRITICAL
Applies to: actor: provider

MEASURE 1.1: Approaches and criteria to evaluate the trustworthiness of AI systems are established. The criteria align with risk categories identified during the Map function.

How to fix: Add evaluation_criteria or trustworthiness_criteria to your AI risk assessment or create a separate evaluation framework document.


AI system performance is evaluated and results documented

Severity: CRITICAL
Applies to: actor: provider

MEASURE 2.2: Scientific and technical evaluations of AI system functionality and performance are documented. Evaluation methods are appropriate for the intended context.

How to fix: Add performance_metrics to your model card or technical documentation with evaluation results.


Fairness and bias metrics are documented across population groups

Severity: CRITICAL
Applies to: actor: provider

MEASURE 2.3: AI system performance or improvement is evaluated against benchmarks. Benchmarks include group fairness metrics where relevant.

How to fix: Add per-group performance metrics to your bias assessment. Include group_metrics with results per population subgroup.
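A sketch of a `group_metrics` entry; the metric, subgroups, values, and the `max_disparity`/`threshold` fields are illustrative assumptions:

```yaml
# per-group metrics in the bias assessment (subgroups and values are illustrative)
group_metrics:
  metric: false_negative_rate
  groups:
    - group: "age_18_34"
      value: 0.08
    - group: "age_65_plus"
      value: 0.14
  max_disparity: 0.06      # assumed summary field
  threshold: 0.10          # organization-defined fairness threshold (assumed)
```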


AI system security and adversarial robustness are evaluated

Severity: MAJOR
Applies to: actor: provider

MEASURE 2.7: AI system security and resilience — including adversarial robustness — are evaluated and documented. Results are considered in risk treatment decisions.

How to fix: Document security evaluation results in docs/security/ or docs/ai-rmf/ including adversarial testing findings.


Risks from third-party AI components and data are documented

Severity: MINOR
Applies to: actor: provider

MEASURE 2.9: The AI system architecture, including AI components and third-party components, is documented and known. Risks from third-party use are considered.

How to fix: Add third_party_components or supply_chain_risks to your technical documentation.


AI risks are prioritized and treatment decisions are documented

Severity: CRITICAL
Applies to: actor: provider

MANAGE 1.1: A risk treatment plan is established for AI risks. Identified risks are prioritized and treatment decisions — accept, mitigate, transfer, or avoid — are documented.

How to fix: Create docs/ai-rmf/risk-treatment-plan.yaml with: prioritized_risks, treatment_decisions (accept/mitigate/transfer/avoid), owners, target_dates.
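A minimal sketch of such a treatment plan, covering the four listed fields; risk IDs, owners, and dates are illustrative:

```yaml
# docs/ai-rmf/risk-treatment-plan.yaml
prioritized_risks:
  - id: R-001
    priority: 1
treatment_decisions:
  - risk_id: R-001
    decision: mitigate     # accept | mitigate | transfer | avoid
    action: "Retrain with augmented non-English data"   # illustrative
owners:
  - risk_id: R-001
    owner: "ML Team Lead"  # illustrative
target_dates:
  - risk_id: R-001
    date: 2024-06-30
```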


Residual risks after treatment are identified and accepted

Severity: MAJOR
Applies to: actor: provider

MANAGE 1.3: Responses to the AI risks deemed highest priority are developed, planned, and documented. Residual risks are identified and documented.

How to fix: Add a residual_risks field to your risk treatment plan listing the risks that remain after mitigation.


AI incident response and escalation procedures are documented

Severity: CRITICAL
Applies to: actor: provider

MANAGE 2.2: Mechanisms are established to inventory AI systems and report failures, unexpected outcomes, and other impacts that may affect the organization or individuals.

How to fix: Create docs/ai-rmf/incident-response.yaml or config/incident_response.yaml with: reporting_process, escalation_path, response_timeline.
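A minimal sketch of such an incident-response file, covering the three listed fields; the process, escalation tiers, and timelines are illustrative:

```yaml
# docs/ai-rmf/incident-response.yaml
reporting_process: "File an incident in the tracker with the 'ai-incident' label"  # illustrative
escalation_path:
  - "On-call engineer"
  - "AI Risk Owner"
  - "AI Risk Committee"
response_timeline:
  acknowledge: "4 hours"
  initial_assessment: "1 business day"
  stakeholder_notification: "3 business days for high-severity incidents"
```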


Deployed AI system is monitored for performance and risk

Severity: MAJOR
Applies to: actor: provider

MANAGE 2.4: Deployed AI systems are monitored for performance and outcomes. Feedback is used to improve risk management processes.

How to fix: Document your production monitoring plan with: monitored_metrics, alert_thresholds, review_frequency, feedback_loop.
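A sketch of a monitoring plan covering the four listed fields; metric names, thresholds, and the PSI drift measure are illustrative assumptions:

```yaml
# production monitoring plan (metrics and thresholds are illustrative)
monitored_metrics:
  - name: prediction_accuracy
    source: "weekly labeled sample"
  - name: input_drift
    source: "feature distribution monitor"
alert_thresholds:
  prediction_accuracy: "< 0.85"
  input_drift: "PSI > 0.2"
review_frequency: monthly
feedback_loop: "Monitoring findings feed the quarterly risk assessment review"
```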