Everything you need to know about EU artificial intelligence regulation
Categories:

- AI systems posing unacceptable risk that are banned in the EU, including social scoring, manipulative techniques, and biometric categorization (1 article)
- AI systems requiring strict compliance obligations, including risk management, data quality, transparency, and human oversight (22 articles)
- AI systems with transparency obligations — users must know they are interacting with AI, including chatbots, deepfakes, and generative AI (3 articles)
- General-purpose models such as large language models (LLMs), with specific transparency and systemic risk assessment obligations (5 articles)
- The institutional framework for enforcing the EU AI Act, including the AI Office, national authorities, sandboxes, and penalties (36 articles)

Articles:

- Overview of amendments to the EU AI Act adopted through the Omnibus VII package in March 2026: new deadlines, simplification for small businesses, and changes to prohibited practices.
- Omnibus VII adds a new prohibition under Art. 5(1)(i) of the EU AI Act: AI systems that generate sexually explicit images or videos without consent (nudifiers, deepfake pornography).
- Detailed analysis of Article 4 of the EU AI Act on AI literacy: who must have a programme, what it means in practice, best practices for implementation, and programme examples.
- Official European Commission guidelines for implementing obligations related to high-risk AI systems under the EU AI Act (Regulation 2024/1689), including classification, requirements, and practical examples.
- European Commission guidelines on transparency obligations for providers and deployers of AI systems under Article 50 of the EU AI Act, including labelling of AI-generated content and the Code of Practice.
- Answers to the most frequently asked questions about the EU AI Act, provided by the European AI Office, including application deadlines, obligations for different actors, and the compliance process.
- The Code of Practice for General-Purpose AI models: a voluntary instrument that helps GPAI model providers demonstrate compliance with their obligations under the EU AI Act.
- European Commission guidelines on prohibited artificial intelligence practices under Article 5 of the EU AI Act, with legal interpretations and practical examples.
- Overview of harmonised standards being developed by CEN and CENELEC to support the implementation of the EU AI Act, including quality management systems, risk management, and technical requirements.
- ISO/IEC 42001:2023, the first international standard for artificial intelligence management systems (AIMS), helping organisations establish, implement, and continually improve a responsible approach to AI systems.
- ISO/IEC 23894:2023, guidance on managing risks specific to artificial intelligence throughout the AI system lifecycle, building on the ISO 31000 framework.
- The NIST AI Risk Management Framework (AI RMF 1.0), a voluntary framework from the U.S. National Institute of Standards and Technology for managing AI system risks, built on four functions: Govern, Map, Measure, and Manage.
- ISO/IEC 22989:2022, a foundational standard for artificial intelligence that defines over 110 key concepts and terms, providing a common language for all stakeholders in the AI ecosystem.