ComplianceForge AI

Guidelines on High-Risk AI Systems

Official European Commission guidelines for implementing obligations related to high-risk AI systems under the EU AI Act (Regulation 2024/1689), including classification, requirements, and practical examples.


What is this document?

The Guidelines on High-Risk AI Systems are an official European Commission document that clarifies the practical application of Article 6 of the EU AI Act (Regulation 2024/1689). The Commission was required to publish these guidelines by 2 February 2026, and they include a comprehensive list of practical use case examples of AI systems that are and are not high-risk.

The document is intended for providers, deployers, and all other actors in the AI system value chain who need to understand whether their systems fall into the high-risk category.

Key points

High-risk classification

The EU AI Act defines two main categories of high-risk AI systems:

  1. AI systems embedded in regulated products (Annex I) — AI systems that are a safety component of products covered by existing EU harmonisation legislation (e.g. medical devices, machinery, toys, vehicles). Rules for these systems apply from 2 August 2027.

  2. Standalone high-risk AI systems (Annex III) — AI systems in eight areas of application: biometrics, critical infrastructure, education and vocational training, employment, access to essential private and public services, law enforcement, migration and border management, and the administration of justice and democratic processes. Rules for these systems apply no later than 2 December 2027.

Key requirements for high-risk systems

Articles 8-15 of the AI Act prescribe mandatory requirements:

  • Risk management system (Art. 9) — continuous identification and mitigation of risks
  • Data governance (Art. 10) — data quality for training, validation, and testing
  • Technical documentation (Art. 11) — detailed documentation prior to placing on the market
  • Record-keeping (Art. 12) — automatic logging of system operations
  • Transparency (Art. 13) — instructions for use for deployers
  • Human oversight (Art. 14) — capability for human intervention
  • Accuracy, robustness, and cybersecurity (Art. 15)

Exemption from high-risk classification

Article 6(3) provides that an AI system listed in Annex III shall not be considered high-risk if it does not pose a significant risk to the health, safety, or fundamental rights of natural persons, including where the system does not materially influence the outcome of decision-making. This applies where the system only performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns or deviations from them without replacing human assessment, or performs a purely preparatory task. However, an Annex III system that performs profiling of natural persons is always considered high-risk.
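As an illustration only (not legal advice), the Article 6(3) derogation can be sketched as a simple check: the condition flags below paraphrase Art. 6(3)(a)-(d), the class and field names are hypothetical, and the profiling override reflects the rule that profiling of natural persons always keeps an Annex III system high-risk.

```python
from dataclasses import dataclass

@dataclass
class AnnexIIISystem:
    """Hypothetical model of an Annex III system's exemption-relevant traits."""
    narrow_procedural_task: bool = False             # Art. 6(3)(a)
    improves_completed_human_activity: bool = False  # Art. 6(3)(b)
    detects_decision_patterns_only: bool = False     # Art. 6(3)(c)
    preparatory_task_only: bool = False              # Art. 6(3)(d)
    performs_profiling: bool = False                 # profiling override

def exempt_from_high_risk(system: AnnexIIISystem) -> bool:
    """Return True if the system may fall outside the high-risk category."""
    if system.performs_profiling:
        # Profiling of natural persons is always high-risk under Art. 6(3).
        return False
    return any([
        system.narrow_procedural_task,
        system.improves_completed_human_activity,
        system.detects_decision_patterns_only,
        system.preparatory_task_only,
    ])
```

For example, a system that only prepares documents for a later human assessment could qualify for the exemption, while the same system performing profiling could not.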

How does this apply to organisations?

For providers

Organisations developing high-risk AI systems must:

  • Establish a quality management system (Art. 17)
  • Conduct a conformity assessment before placing on the market (Art. 43)
  • Register the system in the EU database (Art. 49)
  • Establish a post-market monitoring system (Art. 72)
  • Report serious incidents to competent authorities (Art. 73)

For deployers

Organisations using high-risk AI systems must:

  • Ensure use in accordance with the instructions (Art. 26)
  • Carry out human oversight of the system
  • Ensure that input data is relevant and representative
  • Conduct a fundamental rights impact assessment (Art. 27) — for public bodies and private entities providing public services

Practical steps

  1. Identify all AI systems within the organisation
  2. Classify each system according to Annex I and Annex III
  3. Assess whether the exemption under Art. 6(3) applies
  4. Implement the requirements for systems classified as high-risk
  5. Document the entire process for audit purposes
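A minimal sketch of steps 2-5 above in Python, assuming a simple in-house inventory format; the field names (`annex_i_safety_component`, `annex_iii_area`, `art_6_3_exempt`) are illustrative and do not come from the Act.

```python
def classify(system: dict) -> str:
    """Steps 2-3: classify one inventoried AI system."""
    if system.get("annex_i_safety_component"):
        # Safety component of a product under Annex I harmonisation legislation.
        return "high-risk"
    if system.get("annex_iii_area"):
        # Annex III area of application; check the Art. 6(3) exemption.
        if system.get("art_6_3_exempt"):
            return "not high-risk"
        return "high-risk"
    return "not high-risk"

def audit_log(inventory: list[dict]) -> list[tuple[str, str]]:
    """Step 5: record each system's classification for audit purposes."""
    return [(s["name"], classify(s)) for s in inventory]
```

In practice, whether a system is a "safety component", falls within an Annex III area, or qualifies for the exemption is a legal assessment; the code only shows how the documented outcome of that assessment might drive a classification record.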

Relevant articles of the EU AI Act

| Article | Topic |
| --- | --- |
| Art. 6 | Classification rules for high-risk AI systems |
| Art. 8-15 | Requirements for high-risk systems |
| Art. 16-17 | Obligations for providers of high-risk systems |
| Art. 26-27 | Obligations for deployers of high-risk systems |
| Art. 43 | Conformity assessment |
| Art. 49 | Registration in the EU database |
| Annex I | List of EU harmonisation legislation |
| Annex III | Areas of application for high-risk AI systems |

Source document

The official European Commission guidelines are available at: Supporting the implementation of the AI Act with clear guidelines

Additional information on classification is available at: Article 6 — Classification Rules for High-Risk AI Systems
