ComplianceForge AI

EU AI Act — Articles

Complete overview of the EU AI Act by topic and risk level.


Prohibited AI Practices

AI systems posing unacceptable risk that are banned outright in the EU, including social scoring, manipulative techniques, and biometric categorization based on sensitive attributes.

  • Article 5 — Prohibited AI practices

High-Risk AI Systems

AI systems subject to strict compliance obligations, including risk management, data quality, transparency, and human oversight.

  • Article 6 — Classification rules for high-risk AI systems
  • Article 7 — Amendments to Annex III
  • Article 8 — Compliance with the requirements
  • Article 9 — Risk management system
  • Article 10 — Data and data governance
  • Article 11 — Technical documentation
  • Article 12 — Record-keeping
  • Article 13 — Transparency and provision of information to deployers
  • Article 14 — Human oversight
  • Article 15 — Accuracy, robustness and cybersecurity
  • Article 16 — Obligations of providers of high-risk AI systems
  • Article 17 — Quality management system
  • Article 18 — Documentation keeping
  • Article 19 — Automatically generated logs
  • Article 20 — Corrective actions and duty of information
  • Article 21 — Cooperation with competent authorities
  • Article 22 — Authorised representatives of providers of high-risk AI systems
  • Article 23 — Obligations of importers
  • Article 24 — Obligations of distributors
  • Article 25 — Responsibilities along the AI value chain
  • Article 26 — Obligations of deployers of high-risk AI systems
  • Article 27 — Fundamental rights impact assessment for high-risk AI systems

Limited-Risk AI Systems

AI systems subject to transparency obligations — users must be informed that they are interacting with AI. Covers chatbots, deepfakes, and generative AI content.

  • Article 50 — Transparency obligations for providers and deployers of certain AI systems

General-Purpose AI (GPAI)

General-purpose models like large language models (LLMs) with specific transparency and systemic risk assessment obligations.
  • Article 51 — Classification of general-purpose AI models as general-purpose AI models with systemic risk
  • Article 52 — Procedure
  • Article 53 — Obligations for providers of general-purpose AI models
  • Article 54 — Authorised representatives of providers of general-purpose AI models
  • Article 55 — Obligations of providers of general-purpose AI models with systemic risk

Governance & Enforcement

Institutional framework for enforcing the EU AI Act, including the AI Office, national authorities, sandboxes, and penalties.

  • Article 64 — AI Office
  • Article 65 — Establishment and structure of the European Artificial Intelligence Board
  • Article 66 — Tasks of the Board
  • Article 67 — Advisory forum
  • Article 68 — Scientific panel of independent experts
  • Article 69 — Access to the pool of experts by the Member States
  • Article 70 — Designation of national competent authorities and single points of contact
  • Article 71 — EU database for high-risk AI systems listed in Annex III
  • Article 72 — Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems
  • Article 73 — Reporting of serious incidents
  • Article 74 — Market surveillance and control of AI systems in the Union market
  • Article 75 — Mutual assistance, market surveillance and control of general-purpose AI systems
  • Article 76 — Supervision of testing in real world conditions by market surveillance authorities
  • Article 77 — Powers of authorities protecting fundamental rights
  • Article 78 — Confidentiality
  • Article 79 — Procedure at national level for dealing with AI systems presenting a risk
  • Article 80 — Procedure for dealing with AI systems classified by the provider as non-high-risk in application of Annex III
  • Article 81 — Union safeguard procedure
  • Article 82 — Compliant AI systems which present a risk
  • Article 83 — Formal non-compliance
  • Article 84 — Union AI testing support structures
  • Article 85 — Right to lodge a complaint with a market surveillance authority
  • Article 86 — Right to explanation of individual decision-making
  • Article 87 — Reporting of infringements and protection of reporting persons
  • Article 88 — Enforcement of the obligations of providers of general-purpose AI models
  • Article 89 — Monitoring actions
  • Article 90 — Alerts of systemic risks by the scientific panel
  • Article 91 — Power to request documentation and information
  • Article 92 — Power to conduct evaluations
  • Article 93 — Power to request measures
  • Article 94 — Procedural rights of economic operators of the general-purpose AI model
  • Article 95 — Codes of conduct for voluntary application of specific requirements
  • Article 96 — Guidelines from the Commission on the implementation of this Regulation
  • Article 97 — Exercise of the delegation
  • Article 98 — Committee procedure
  • Article 99 — Penalties