# NIST AI RMF
The NIST AI Risk Management Framework 1.0 (AI RMF 1.0) provides voluntary guidance for managing AI risks across four core functions: GOVERN, MAP, MEASURE, and MANAGE.
Rulestatus encodes documentation and evidence requirements from all four functions.
Checks apply to `actor: provider` regardless of EU AI Act risk level.
## Summary

| Metric | Count |
|---|---|
| Total assertions | 18 |
| Critical | 10 |
| Major | 6 |
| Minor | 2 |
## Assertions by Function

### Function GOVERN 1.1

#### ASSERT-NIST-AIRMF-GV-001-01

AI risk management policy is documented
| Severity | Applies to |
|---|---|
| CRITICAL | actor: provider |
GOVERN 1.1: Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are established, transparent, and implemented effectively.
How to fix: Create `docs/ai-rmf/ai-risk-policy.yaml` with: `scope`, `governance_structure`, `risk_management_commitment`, `approved_by`.
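A minimal sketch of what `docs/ai-rmf/ai-risk-policy.yaml` could look like. The field names come from the check above; the organization name and values are illustrative only:

```yaml
# docs/ai-rmf/ai-risk-policy.yaml — example values, adapt to your organization
scope: All machine-learning systems developed or operated by the organization
governance_structure: AI governance board reporting to the CTO, meeting quarterly
risk_management_commitment: >
  We commit to identifying, measuring, and managing AI risks
  throughout the system lifecycle, consistent with NIST AI RMF 1.0.
approved_by: Jane Doe, Chief Technology Officer
```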
### Function GOVERN 1.5

#### ASSERT-NIST-AIRMF-GV-002-01

Organizational AI risk tolerance is defined
| Severity | Applies to |
|---|---|
| CRITICAL | actor: provider |
GOVERN 1.5: Organizational risk tolerances are established, communicated, and maintained. Teams understand the organization’s risk tolerance for AI systems.
How to fix: Add a `risk_tolerance` field to your AI risk policy document defining acceptable risk thresholds.
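One possible shape for that field, added to the existing policy document. The threshold labels are examples, not recommendations:

```yaml
# Excerpt added to the AI risk policy — threshold values are illustrative
risk_tolerance:
  statement: No AI system may expose individuals to unmitigated high-severity harm
  acceptable_residual_level: medium   # highest residual risk the org will accept
  escalation_threshold: high          # risks at or above this level require board review
```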
### Function GOVERN 2.1

#### ASSERT-NIST-AIRMF-GV-003-01

Roles and responsibilities for AI risk management are assigned
| Severity | Applies to |
|---|---|
| CRITICAL | actor: provider |
GOVERN 2.1: Roles and responsibilities and organizational accountability for teams that design, develop, deploy, evaluate, and monitor AI systems are documented.
How to fix: Create `docs/ai-rmf/ai-roles.yaml` defining: roles, responsibilities, and accountable individuals for AI risk management.
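A hypothetical structure for `docs/ai-rmf/ai-roles.yaml`; role names and individuals are placeholders:

```yaml
# docs/ai-rmf/ai-roles.yaml — role names and people are placeholders
roles:
  - role: AI Risk Officer
    responsibilities:
      - Maintain the AI risk register
      - Approve risk treatment decisions
    accountable: Jane Doe
  - role: Model Owner
    responsibilities:
      - Document model capabilities and limitations
      - Monitor deployed model performance
    accountable: John Smith
```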
### Function GOVERN 4.1

#### ASSERT-NIST-AIRMF-GV-004-01

AI risks and benefits are communicated to relevant stakeholders
| Severity | Applies to |
|---|---|
| MAJOR | actor: provider |
GOVERN 4.1: Organizational teams are committed to a culture that considers and communicates AI risk and its potential impact on people. AI risk information is regularly communicated to relevant stakeholders.
How to fix: Document your AI risk communication plan: who is informed, how often, and through what channels.
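A sketch of how such a plan might be recorded; the audiences, cadence, and channels below are examples, not requirements:

```yaml
# Communication plan sketch — entries are illustrative
communication_plan:
  - audience: executive leadership
    frequency: quarterly
    channel: risk review meeting
  - audience: engineering teams
    frequency: monthly
    channel: risk register updates
  - audience: affected users
    frequency: on significant change
    channel: product release notes
```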
### Function GOVERN 6.1

#### ASSERT-NIST-AIRMF-GV-005-01

Third-party AI risk management policies are established
| Severity | Applies to |
|---|---|
| MINOR | actor: provider |
GOVERN 6.1: Policies and procedures are in place to assess the provenance of AI models, data, and third-party components used by the organization.
How to fix: Add a `third_party_policy` or `supply_chain_risk` field to your AI risk policy or create a separate vendor risk document.
### Function MAP 1.1

#### ASSERT-NIST-AIRMF-MP-001-01

AI system context, intended use, and deployment environment are documented
| Severity | Applies to |
|---|---|
| CRITICAL | actor: provider |
MAP 1.1: Context is established for the AI risk assessment. Factors affecting the intended use and deployment context of the AI system are documented.
How to fix: Create `docs/ai-rmf/system-context.yaml` with: `intended_use`, `deployment_context`, `affected_populations`, `known_use_cases`.
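An illustrative `docs/ai-rmf/system-context.yaml` for a hypothetical support-triage system; every value below is an example:

```yaml
# docs/ai-rmf/system-context.yaml — hypothetical system, example values
intended_use: Triage incoming customer-support tickets by urgency
deployment_context: Internal web application used by support staff in the EU
affected_populations:
  - Customers submitting support requests
  - Support agents acting on model output
known_use_cases:
  - Ticket prioritization
  - Routing tickets to specialist teams
```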
### Function MAP 2.2

#### ASSERT-NIST-AIRMF-MP-002-01

External factors and dependencies affecting AI risk are documented
| Severity | Applies to |
|---|---|
| MAJOR | actor: provider |
MAP 2.2: Scientific findings, context, and external factors that may affect AI risk and the organization’s ability to manage it are documented.
How to fix: Add `external_factors` or `dependencies` to your system context or risk assessment document.
### Function MAP 3.1

#### ASSERT-NIST-AIRMF-MP-003-01

AI system tasks, capabilities, and limitations are documented
| Severity | Applies to |
|---|---|
| CRITICAL | actor: provider |
MAP 3.1: AI tasks and the capabilities and limitations of AI systems are documented. Known limitations are communicated to relevant AI actors.
How to fix: Add `capabilities` and `limitations` fields to your technical documentation or system context document.
### Function MAP 5.1

#### ASSERT-NIST-AIRMF-MP-004-01

Likelihood and impact of AI risks are assessed
| Severity | Applies to |
|---|---|
| MAJOR | actor: provider |
MAP 5.1: Likelihood and impact of AI risks are assessed for each identified risk. Risk levels reflect the combination of likelihood and impact.
How to fix: Ensure each risk entry in your risk assessment includes `likelihood`, `impact`, and `risk_level` fields.
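One way a risk entry could satisfy this check; the risk, ID scheme, and scale labels are illustrative:

```yaml
# One entry in a risk assessment — scale labels and IDs are examples
risks:
  - id: RISK-001
    description: Model under-prioritizes tickets written in non-English languages
    likelihood: medium    # e.g. low / medium / high
    impact: high
    risk_level: high      # derived from likelihood x impact
```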
### Function MEASURE 1.1

#### ASSERT-NIST-AIRMF-MS-001-01

Criteria for evaluating AI risk are defined
| Severity | Applies to |
|---|---|
| CRITICAL | actor: provider |
MEASURE 1.1: Approaches and criteria to evaluate the trustworthiness of AI systems are established. The criteria align with risk categories identified during the Map function.
How to fix: Add `evaluation_criteria` or `trustworthiness_criteria` to your AI risk assessment or create a separate evaluation framework document.
### Function MEASURE 2.2

#### ASSERT-NIST-AIRMF-MS-002-01

AI system performance is evaluated and results documented
| Severity | Applies to |
|---|---|
| CRITICAL | actor: provider |
MEASURE 2.2: Scientific and technical evaluations of AI system functionality and performance are documented. Evaluation methods are appropriate for the intended context.
How to fix: Add `performance_metrics` to your model card or technical documentation with evaluation results.
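A model-card excerpt sketching the expected shape; the metric names, values, and dataset description are invented for illustration:

```yaml
# Model card excerpt — metrics, values, and dataset are illustrative
performance_metrics:
  - metric: f1_score
    value: 0.87
    dataset: held-out test set (2024-11)
  - metric: accuracy
    value: 0.91
    dataset: held-out test set (2024-11)
evaluation_method: Stratified 80/20 train/test split; full method in evaluation report
```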
### Function MEASURE 2.3

#### ASSERT-NIST-AIRMF-MS-003-01

Fairness and bias metrics are documented across population groups
| Severity | Applies to |
|---|---|
| CRITICAL | actor: provider |
MEASURE 2.3: AI system performance or improvement is evaluated against benchmarks. Benchmarks include group fairness metrics where relevant.
How to fix: Add per-group performance metrics to your bias assessment. Include `group_metrics` with results per population subgroup.
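A sketch of a `group_metrics` section in a bias assessment; the groups, metric, and numbers are fabricated for illustration and carry no benchmark meaning:

```yaml
# Per-group metrics in a bias assessment — all figures are illustrative
group_metrics:
  metric: false_negative_rate
  groups:
    - group: age_18_34
      value: 0.041
    - group: age_65_plus
      value: 0.085
  disparity_threshold: 0.02   # maximum acceptable gap between groups
  notes: Disparity exceeds threshold; mitigation tracked in the risk register
```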
### Function MEASURE 2.7

#### ASSERT-NIST-AIRMF-MS-004-01

AI system security and adversarial robustness are evaluated
| Severity | Applies to |
|---|---|
| MAJOR | actor: provider |
MEASURE 2.7: AI system security and resilience — including adversarial robustness — are evaluated and documented. Results are considered in risk treatment decisions.
How to fix: Document security evaluation results in `docs/security/` or `docs/ai-rmf/`, including adversarial testing findings.
### Function MEASURE 2.9

#### ASSERT-NIST-AIRMF-MS-005-01

Risks from third-party AI components and data are documented
| Severity | Applies to |
|---|---|
| MINOR | actor: provider |
MEASURE 2.9: The AI system architecture, including AI components and third-party components, is documented and known. Risks from third-party use are considered.
How to fix: Add `third_party_components` or `supply_chain_risks` to your technical documentation.
### Function MANAGE 1.1

#### ASSERT-NIST-AIRMF-MG-001-01

AI risks are prioritized and treatment decisions are documented
| Severity | Applies to |
|---|---|
| CRITICAL | actor: provider |
MANAGE 1.1: A risk treatment plan is established for AI risks. Identified risks are prioritized and treatment decisions — accept, mitigate, transfer, or avoid — are documented.
How to fix: Create `docs/ai-rmf/risk-treatment-plan.yaml` with: `prioritized_risks`, `treatment_decisions` (accept/mitigate/transfer/avoid), `owners`, `target_dates`.
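An illustrative `docs/ai-rmf/risk-treatment-plan.yaml`. The nesting of owner and date under each decision is one possible layout, and the risk IDs, names, and dates are placeholders:

```yaml
# docs/ai-rmf/risk-treatment-plan.yaml — IDs, owners, and dates are placeholders
prioritized_risks:
  - RISK-001   # highest priority first
  - RISK-004
treatment_decisions:
  - risk: RISK-001
    decision: mitigate     # accept | mitigate | transfer | avoid
    owner: Jane Doe
    target_date: 2025-09-30
  - risk: RISK-004
    decision: accept
    owner: John Smith
    target_date: 2025-06-15
```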
### Function MANAGE 1.3

#### ASSERT-NIST-AIRMF-MG-002-01

Residual risks after treatment are identified and accepted
| Severity | Applies to |
|---|---|
| MAJOR | actor: provider |
MANAGE 1.3: Responses to the AI risks deemed highest priority are developed, planned, and documented. Residual risks are identified and documented.
How to fix: Add a `residual_risks` field to your risk treatment plan listing the risks that remain after mitigation.
### Function MANAGE 2.2

#### ASSERT-NIST-AIRMF-MG-003-01

AI incident response and escalation procedures are documented
| Severity | Applies to |
|---|---|
| CRITICAL | actor: provider |
MANAGE 2.2: Mechanisms are established to inventory AI systems and report failures, unexpected outcomes, and other impacts that may affect the organization or individuals.
How to fix: Create `docs/ai-rmf/incident-response.yaml` or `config/incident_response.yaml` with: `reporting_process`, `escalation_path`, `response_timeline`.
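A sketch of `docs/ai-rmf/incident-response.yaml`; the process, roles, and timelines are examples to adapt, not prescribed values:

```yaml
# docs/ai-rmf/incident-response.yaml — example process, adapt to your org
reporting_process: >
  Any team member can file an AI incident in the internal tracker;
  the on-call AI Risk Officer triages within one business day.
escalation_path:
  - on_call_engineer
  - ai_risk_officer
  - cto
response_timeline:
  acknowledge: 24h
  initial_assessment: 72h
  stakeholder_notification: 7d
```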
### Function MANAGE 2.4

#### ASSERT-NIST-AIRMF-MG-004-01

Deployed AI system is monitored for performance and risk
| Severity | Applies to |
|---|---|
| MAJOR | actor: provider |
MANAGE 2.4: Deployed AI systems are monitored for performance and outcomes. Feedback is used to improve risk management processes.
How to fix: Document your production monitoring plan with: `monitored_metrics`, `alert_thresholds`, `review_frequency`, `feedback_loop`.
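A monitoring-plan sketch under the field names above; the metrics and thresholds are illustrative and nest each alert threshold with its metric, which is one layout among several:

```yaml
# Production monitoring plan — metric names and thresholds are illustrative
monitored_metrics:
  - name: prediction_accuracy
    alert_threshold: "< 0.90"
  - name: input_drift_score
    alert_threshold: "> 0.15"
review_frequency: monthly
feedback_loop: Alerts and review findings feed back into the AI risk register
```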