EU AI Act

The EU AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive AI law. Its most detailed obligations fall on providers of high-risk AI systems, as classified under Article 6 and Annex III. Rulestatus encodes the documentation and evidence obligations of Articles 5, 6, 9, 10, 11, 13, 14, and 15 as executable checks.

All checks apply to actor: provider, riskLevel: high-risk unless otherwise noted.

Total assertions: 44
Critical: 15
Major: 19
Minor: 9
Info: 1

System is classified under a defined risk level

Severity: CRITICAL
Applies to: actor: provider, riskLevel: high-risk

Article 6 — Classification rules for high-risk AI systems

How to fix: Add a risk_level field in .rulestatus.yaml under system:. Valid values: prohibited, high-risk, limited-risk, minimal-risk.
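
For illustration, a minimal sketch of the system block (the name field and the values shown are assumptions, not requirements of this check):

    # .rulestatus.yaml
    system:
      name: loan-scoring-service              # illustrative
      risk_level: high-risk                   # one of: prohibited, high-risk, limited-risk, minimal-risk
      annex_iii_category: essential-services  # covered by the next check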


High-risk system identifies its Annex III category

Severity: MAJOR
Applies to: actor: provider, riskLevel: high-risk

Article 6(2) and Annex III — High-risk AI systems referred to in Article 6(2)

How to fix: Add an annex_iii_category field to your system config. Valid categories: biometric, critical-infrastructure, education, employment, essential-services, law-enforcement, migration, justice.


Prohibited use cases documented as not applicable

Severity: INFO
Applies to: actor: provider, riskLevel: high-risk

Article 5 — Prohibited AI practices

How to fix: Create a prohibited-uses document in docs/compliance/ that explicitly states which Article 5 practices are not applicable.


Risk management system documentation exists

Severity: CRITICAL
Applies to: actor: provider, riskLevel: high-risk

Article 9(1): “A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.”

How to fix: Create a risk management document in docs/risk-management/ or compliance/. Must include: system_name, identified_risks, mitigation_measures, review_date.
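
A minimal sketch with the four required fields (the filename and all values are assumptions):

    # docs/risk-management/risk-management.yaml   (filename illustrative)
    system_name: loan-scoring-service
    identified_risks:
      - description: Incorrect creditworthiness scores for some applicants
    mitigation_measures:
      - Human review of every adverse decision
    review_date: 2025-01-15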


Risk management document covers intended use cases

Severity: MAJOR
Applies to: actor: provider, riskLevel: high-risk

Article 9(1): risk management must be maintained in relation to the AI system’s intended purpose.

How to fix: Add an intended_use or use_cases field to your risk management document.


Risk register covers health, safety, and fundamental rights dimensions

Severity: CRITICAL
Applies to: actor: provider, riskLevel: high-risk

Article 9(2)(a): “identification and analysis of the known and the reasonably foreseeable risks to health, safety or the fundamental rights…”

How to fix: Add a dimension field to each risk entry. Required values: health, safety, fundamental_rights.


Risk register includes emerging risks and foreseeable misuse scenarios

Severity: CRITICAL
Applies to: actor: provider, riskLevel: high-risk

Article 9(2)(b): “estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse.”

How to fix: Add risk entries with source: emerging or category: misuse.
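
For illustration, risk entries covering this check and the dimension check above might look like this (the exact entry shape is an assumption; field values come from the two checks):

    identified_risks:
      - description: Discriminatory scoring of protected groups
        dimension: fundamental_rights   # required values: health, safety, fundamental_rights
      - description: Scorer reused on populations it was never validated for
        source: emerging
        category: misuse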


Risk management is described as a continuous iterative process

Severity: MAJOR
Applies to: actor: provider, riskLevel: high-risk

Article 9(2): “The risk management system shall be understood as a continuous iterative process…”

How to fix: Add a review_cycle field (e.g. ‘quarterly’) to your risk management document.


Testing procedures with representative data are documented

Severity: MAJOR
Applies to: actor: provider, riskLevel: high-risk

Article 9(4)(a): “testing of the high-risk AI system… shall be performed on the basis of data relevant for the intended purpose.”

How to fix: Create a test plan document with representative_data field.


Residual risks are identified and documented

Severity: MINOR
Applies to: actor: provider, riskLevel: high-risk

Article 9(2)(c): “evaluation of other possibly arising risks based on the analysis of data gathered from the post-market monitoring system.”

How to fix: Add residual risk entries with status: residual or a residual_risk field.


Serious incident reporting procedure is documented

Severity: MAJOR
Applies to: actor: provider, riskLevel: high-risk

Articles 9 and 73: providers must establish procedures for reporting serious incidents.

How to fix: Create docs/incident-response.md or config/incident_response.yaml.


Training data documentation exists

Severity: CRITICAL
Applies to: actor: provider, riskLevel: high-risk

Article 10(1): “High-risk AI systems… shall be developed on the basis of training, validation and testing data sets that meet the quality criteria…”

How to fix: Create a data governance document in docs/compliance/ or docs/data-governance/, or include training data details in your model card.


Bias examination is documented for training data

Severity: CRITICAL
Applies to: actor: provider, riskLevel: high-risk

Article 10(2)(f): “examination in view of possible biases, that are likely to affect health and safety or lead to the violation of fundamental rights.”

How to fix: Create docs/bias_assessment.yaml or docs/compliance/bias-examination.md.


Bias assessment covers at least 3 protected characteristics

Severity: CRITICAL
Applies to: actor: provider, riskLevel: high-risk

Article 10(2)(f): bias examination must cover characteristics that could affect fundamental rights.

How to fix: Include at least 3 of: gender, race, age, disability, nationality, religion, ethnicity in characteristics_evaluated.
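
A minimal sketch (the choice of characteristics is illustrative):

    # docs/bias_assessment.yaml
    characteristics_evaluated:   # at least 3 of the listed values
      - gender
      - age
      - ethnicity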


Data relevance and representativeness are justified

Severity: MAJOR
Applies to: actor: provider, riskLevel: high-risk

Article 10(3): “Training, validation and testing data sets shall be relevant, sufficiently representative…”

How to fix: Add data_sources and representativeness fields to your data governance document or model card.


Special category data handling is documented if applicable

Severity: MAJOR
Applies to: actor: provider, riskLevel: high-risk

Article 10(5): “…the providers of such systems may process special categories of personal data…”

How to fix: If your system processes special category data, add special_category_legal_basis field.


Data minimisation principle is documented

Severity: MINOR
Applies to: actor: provider, riskLevel: high-risk

Article 10(3): data sets must have appropriate statistical properties.

How to fix: Add a data_minimisation field to your data governance document.


Data quality criteria are defined

Severity: MINOR
Applies to: actor: provider, riskLevel: high-risk

Article 10(3): training, validation and testing data sets must meet quality criteria for their intended purpose.

How to fix: Add a data_quality_criteria field to your data governance document.
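
The Article 10 checks above all read from the same data-governance document. A combined sketch, with assumed filename and illustrative values:

    # docs/data-governance/data-governance.yaml   (filename illustrative)
    data_sources:
      - Credit-bureau records, 2019-2023
    representativeness: Sample stratified by region and age band to match the applicant population.
    data_minimisation: Only fields used as model features are retained.
    data_quality_criteria:
      - At least 99% completeness on mandatory fields   # illustrative threshold
    special_category_legal_basis: Article 10(5) bias detection and correction   # only if applicable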


Technical documentation exists with system description

Severity: CRITICAL
Applies to: actor: provider, riskLevel: high-risk

Article 11(1): “The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market…”

How to fix: Create technical documentation in docs/technical/ or docs/compliance/. Must include system_name and general_description.


Technical documentation covers required Annex IV sections

Severity: MAJOR
Applies to: actor: provider, riskLevel: high-risk

Article 11 and Annex IV — Technical documentation required for high-risk AI systems.

How to fix: Ensure your technical documentation includes at least 10 of the 15 Annex IV sections.


Technical documentation includes model architecture description

Severity: MAJOR
Applies to: actor: provider, riskLevel: high-risk

Annex IV, point 2(b): technical documentation must include the design specifications of the system, including its general logic and algorithms.

How to fix: Add model_architecture to your technical documentation or model_type to your model card.


Performance metrics are documented

Severity: MAJOR
Applies to: actor: provider, riskLevel: high-risk

Annex IV, point 2(g): technical documentation must include the metrics used to measure performance, together with the validation and testing procedures.

How to fix: Add performance_metrics to your technical documentation or metrics to your model card.


Technical documentation is versioned and dated

Severity: MINOR
Applies to: actor: provider, riskLevel: high-risk

Article 11(2): technical documentation must be kept up to date.

How to fix: Add version and date fields to your technical documentation.


Technical documentation references relevant standards

Severity: MINOR
Applies to: actor: provider, riskLevel: high-risk

Annex IV, point 7: technical documentation must list the harmonised standards applied in full or in part.

How to fix: Add a standards field listing applicable standards (e.g. ISO 42001, IEC 62443).
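
The Article 11 checks above can be satisfied from a single technical documentation file. A combined sketch (filename and values are assumptions):

    # docs/technical/technical-documentation.yaml   (filename illustrative)
    system_name: loan-scoring-service
    general_description: Scores consumer credit applications for pre-approval.
    model_architecture: Gradient-boosted decision trees   # illustrative
    performance_metrics:
      auc: 0.91                                           # illustrative value
    version: 1.4.0
    date: 2025-01-15
    standards:
      - ISO/IEC 42001
      - IEC 62443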


AI system discloses that output is AI-generated

Severity: CRITICAL
Applies to: actor: provider, riskLevel: high-risk

Article 13(1): “High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent…”

How to fix: Enable AI disclosure in config/transparency.yaml with ai_disclosure.enabled: true.


Transparency configuration documents AI disclosure and provider contact

Severity: MAJOR
Applies to: actor: provider, riskLevel: high-risk

Article 13(1): operation must be sufficiently transparent; Article 13(3)(a): instructions must include provider name and address.

How to fix: Create config/transparency.yaml with ai_disclosure (enabled: true, mechanism) and provider_contact fields.
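
A sketch of the full file, assuming the dotted keys in these checks map to nested YAML (contact details illustrative):

    # config/transparency.yaml
    ai_disclosure:
      enabled: true
      mechanism: Notice shown before each automated decision   # illustrative
    provider_contact:
      name: Example Lending GmbH               # illustrative
      address: Musterstrasse 1, 10115 Berlin   # illustrative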


Instructions for use documentation exists

Severity: CRITICAL
Applies to: actor: provider, riskLevel: high-risk

Article 13(2): “High-risk AI systems shall be accompanied by instructions for use in an appropriate digital or other format.”

How to fix: Create docs/compliance/instructions-for-use.md.


Instructions for use include intended purpose and known limitations

Severity: MAJOR
Applies to: actor: provider, riskLevel: high-risk

Article 13(3)(b): instructions must specify the intended purpose and known limitations.

How to fix: Add intended_purpose and known_limitations fields to your instructions-for-use document.


Instructions for use include performance characteristics

Severity: MAJOR
Applies to: actor: provider, riskLevel: high-risk

Article 13(3)(b)(ii): instructions must specify the level of accuracy, including its metrics, robustness and cybersecurity.

How to fix: Add performance_metrics or accuracy field to your instructions-for-use document.


System capabilities and limitations are disclosed to deployers

Severity: MAJOR
Applies to: actor: provider, riskLevel: high-risk

Article 13(3)(b): instructions must include the capabilities and limitations of the system.

How to fix: Add limitations or capabilities field to instructions for use or technical documentation.


Provider contact information is documented

Severity: MINOR
Applies to: actor: provider, riskLevel: high-risk

Article 13(3)(a): instructions must include the name and registered address of the provider.

How to fix: Add provider_contact to instructions for use or system config.
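
The instructions-for-use fields from the checks above, sketched as YAML front matter (the front-matter format is an assumption; values illustrative):

    ---
    # front matter of docs/compliance/instructions-for-use.md
    intended_purpose: Pre-screening of consumer credit applications
    known_limitations:
      - Not validated for applicants without a credit history   # illustrative
    performance_metrics:
      accuracy: 0.89                                            # illustrative value
    provider_contact: Example Lending GmbH, Musterstrasse 1, Berlin
    ---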


Human override mechanism exists or is documented

Severity: CRITICAL
Applies to: actor: provider, riskLevel: high-risk

Article 14(1): “High-risk AI systems shall be designed and developed in such a way… that they can be effectively overseen by natural persons…”

How to fix: Create config/human_oversight.yaml with override.enabled: true, document the mechanism, or attest manually with rulestatus attest ASSERT-EU-AI-ACT-014-001-01.


Human oversight measures are specified in technical documentation

Severity: CRITICAL
Applies to: actor: provider, riskLevel: high-risk

Article 14(2): oversight measures must be built into the system or identified for implementation by the deployer.

How to fix: Add a human_oversight_measures field to your technical documentation.


Explainability output is available to oversight personnel

Severity: MAJOR
Applies to: actor: provider, riskLevel: high-risk

Article 14(4)(c): oversight persons must be able to interpret the AI system’s output.

How to fix: Add enabled: true to config/explainability.yaml, document explainability in technical docs, or attest manually with rulestatus attest ASSERT-EU-AI-ACT-014-003-01.


System can be paused or stopped by human operator

Severity: MAJOR
Applies to: actor: provider, riskLevel: high-risk

Article 14(4)(e): oversight persons must be able to intervene in the operation of the system or interrupt it through a ‘stop’ button or similar procedure.

How to fix: Add pause_capability: true to config/human_oversight.yaml.
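
A combined oversight config covering this check and the override check above (the mechanism text is illustrative; nesting of the dotted key is assumed):

    # config/human_oversight.yaml
    override:
      enabled: true
      mechanism: Reviewers can replace the system's decision in the case UI   # illustrative
    pause_capability: true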


Human oversight training materials exist

Severity: MINOR
Applies to: actor: provider, riskLevel: high-risk

Article 14(3): oversight persons must have the necessary competence, training and authority.

How to fix: Create docs/training/ or docs/oversight/ documentation for oversight personnel.


Audit logs are accessible to oversight personnel

Severity: MINOR
Applies to: actor: provider, riskLevel: high-risk

Article 12: high-risk AI systems shall have logging capabilities for monitoring purposes.

How to fix: Add audit_log.enabled: true to config/logging.yaml.
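
Assuming the dotted key maps to nested YAML, as in the other configs:

    # config/logging.yaml
    audit_log:
      enabled: true
      retention_days: 365   # illustrative; not required by this check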


Accuracy benchmarks are documented

Severity: CRITICAL
Applies to: actor: provider, riskLevel: high-risk

Article 15(1): “High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity…”

How to fix: Add metrics to your model card or performance_metrics to your technical documentation.


Robustness testing is documented, including adversarial inputs

Severity: CRITICAL
Applies to: actor: provider, riskLevel: high-risk

Article 15(3) and 15(4): AI systems shall be resilient as regards errors, faults or inconsistencies, and against attempts by third parties to alter their use or performance by exploiting system vulnerabilities, including adversarial inputs.

How to fix: Create a robustness testing document in docs/ or add robustness test results to your test results file.


Cybersecurity measures are documented

Severity: CRITICAL
Applies to: actor: provider, riskLevel: high-risk

Article 15(4): “High-risk AI systems shall be resilient as regards attempts by unauthorised third parties to alter their use, outputs or performance by exploiting the system’s vulnerabilities.”

How to fix: Create a security document in docs/security/ or a security config in config/security.yaml.


Access control policy is documented

Severity: MAJOR
Applies to: actor: provider, riskLevel: high-risk

Article 15(4): systems must include technical measures to protect against unauthorised access.

How to fix: Add an access_control field to your security config or security documentation.


Fallback plan for technical failures is documented

Severity: MAJOR
Applies to: actor: provider, riskLevel: high-risk

Article 15(3): AI systems must include fallback plans and fail-safe measures for when errors occur.

How to fix: Create a fallback-plan document in docs/ or add a fallback_plan field to your technical documentation.


Accuracy is measured per relevant population groups (fairness metrics)

Severity: MAJOR
Applies to: actor: provider, riskLevel: high-risk

Article 15(1): accuracy levels must be declared in the accompanying instructions of use, and accuracy should be assessed with relevant disaggregated metrics.

How to fix: Add per-group performance metrics to your bias assessment. Each group should have its own accuracy/performance entry.
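
For illustration only; the group_metrics key name is an assumption, since the checks name no specific key for per-group entries:

    # per-group entries in docs/bias_assessment.yaml
    group_metrics:
      - group: gender=female
        accuracy: 0.88   # illustrative values
      - group: gender=male
        accuracy: 0.90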


Third-party security assessment is conducted or planned

Severity: MINOR
Applies to: actor: provider, riskLevel: high-risk

Article 15 and recital 74: providers are encouraged to carry out security assessments.

How to fix: Add a security_assessment field to your security documentation or create a security_audit structured file.
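
A sketch of a security config covering this check and the access-control check above (structure and values are assumptions):

    # config/security.yaml
    access_control: Role-based access; administrative actions require MFA   # illustrative
    security_assessment:
      type: third-party penetration test
      planned_date: 2025-06-01   # illustrative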


The following articles are intentionally excluded from automated checking:

Article 7 — Amendments to Annex III — Article 7 empowers the European Commission to update Annex III through delegated acts not yet published. Assertions will be added once those acts are finalised.

Article 8 — Compliance with requirements — Article 8 is a general obligation to comply with Articles 9–15. It has no standalone evidence requirements; compliance is demonstrated by passing the Article 9–15 assertions.

Article 12 — Record-keeping — Article 12 requires automatic logging by the AI system at runtime. Rulestatus checks documentation and configuration artifacts, not runtime infrastructure. This obligation must be verified separately in your deployment environment.