EU AI Act: What Changed After Omnibus VII (March 2026)

The European Parliament has passed the Omnibus VII package: delayed high-risk AI deadlines, a new ban on nudifier tools, and clarifications for SMEs. Here's what it means for your business.

5 min read
ComplianceForge AI

On March 26, 2026, the European Parliament approved the Omnibus VII package, the biggest set of changes to the EU AI Act since its adoption in June 2024. If you use AI in your business, you need to know what changed.

The changes include delayed deadlines for high-risk AI systems, a new ban on nudifier AI tools, and clarifications on how the regulation applies to SMEs. In this article, we break down what specifically changed, why, and — most importantly — what it means for you.

Old vs New Deadlines — Overview

Category                          Old Deadline    New Deadline    Change
Prohibited practices (Art. 5)     February 2025   Unchanged       ALREADY ACTIVE
AI literacy (Art. 4)              February 2025   Unchanged       ALREADY ACTIVE
GPAI models                       August 2025     Unchanged       ALREADY ACTIVE
Transparency (Art. 50)            August 2026     November 2026   +3 months
High-risk standalone (Annex III)  August 2026     December 2027   +16 months
High-risk embedded (Annex I)      August 2026     August 2028     +24 months

Source: European Parliament — press release

Why the Delay?

Short answer: harmonized technical standards weren't ready on time.

CEN and CENELEC — Europe's standardization bodies — were supposed to prepare harmonized standards for high-risk AI systems by August 2025. That didn't happen. Without these standards, companies don't know exactly how to prove compliance, and regulators don't know what to check.

The European Commission proposed a "stop-the-clock" mechanism in February 2026 — delaying deadlines until standards are ready. The EU Council supported the approach, and Parliament voted on the final version on March 26.

The process wasn't without controversy: 127 civil society organizations warned that the delay was the result of Big Tech lobbying, a concern echoed widely online, including on r/BuyEU.

On the other hand, industry argued it's impossible to meet obligations without clear technical standards — an argument with legitimate basis. The truth is likely somewhere in between: standards are indeed delayed, but that doesn't mean companies can't start preparing.

New Ban: Nudifier AI Tools

Omnibus VII introduces a new prohibition under Article 5 — AI systems for generating non-consensual sexual content. This includes:

  • Nudifier tools — AI that "undresses" people in photographs
  • Deepfake pornography — synthetic sexual content using real people's likenesses
  • CSAM — any AI-generated child sexual abuse material

The only exception is systems with demonstrably effective safety measures that prevent misuse (e.g., forensic tools for law enforcement).

Penalties for violations: up to €35 million or 7% of global annual turnover — the same level as other prohibited practices.

According to the European Parliament, the ban is a direct response to the wave of deepfake scandals that hit European schools and public figures in 2025.

3 Categories That Are ALREADY ACTIVE

This is the part most companies miss. By focusing on the delayed deadlines for high-risk systems, many overlook that three categories of obligations are already in force:

Prohibited Practices (Article 5) — Since February 2025

Employee social scoring, manipulative AI exploiting vulnerabilities, mass scraping of biometric data from the internet — all of this is already prohibited. Penalties: up to €35M or 7% of global turnover.

Most companies don't do this intentionally, but check whether your AI tools use practices that could fall into this category. For example, AI-based "emotion recognition" in the workplace is banned.

AI Literacy Obligation (Article 4) — Since February 2025

Every company using AI systems must ensure its employees have a sufficient level of AI literacy. This doesn't mean a PhD in machine learning — it means that people using AI tools understand their capabilities, limitations, and risks.

Penalties for non-compliance: up to €15 million or 3% of global turnover. According to a report from r/FuturePrep, only 8 out of 27 Member States have operational contact points for enforcement — but that doesn't mean you can ignore the obligation.

GPAI Models — Since August 2025

Providers of general-purpose AI models (e.g., GPT, Claude, Gemini, Llama) must meet transparency obligations, including documentation about training data and copyright policies. This primarily applies to providers, not deployers, but it indirectly affects you too: check whether your AI providers have published compliance documentation.

What Does This Mean for Your Business?

Using AI in daily operations?

Check immediately whether you're compliant with Article 4 (AI literacy) and Article 5 (prohibited practices). These obligations are already active and penalties are significant. You don't need complex documentation — you need to ensure your employees understand the AI tools they use.

Have high-risk AI systems?

You've got more time — but don't delay preparation. December 2027 sounds far away, but risk management systems, technical documentation, and human oversight require months of preparation. As LegalNodes notes: "The extended timeline should be seen as an opportunity to build robust compliance frameworks, not a reason to defer action."

Generating AI content (text, images, video)?

Your deadline is November 2026 — just 8 months from now. Transparency obligations under Article 50 require labeling AI-generated content in machine-readable format. Start implementing watermarking and content labeling now.
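Article 50 doesn't yet prescribe a single labeling format, but a common convention for marking AI-generated media is the IPTC "digital source type" vocabulary. The sketch below builds a minimal machine-readable label as JSON; the helper name and schema are illustrative assumptions, not a regulatory standard.

```python
import json
from datetime import datetime, timezone

# Real IPTC NewsCodes value for fully AI-generated media; the rest of
# this schema (field names, helper) is an illustrative assumption.
IPTC_AI_GENERATED = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def build_ai_content_label(asset_id: str, generator: str) -> dict:
    """Return a JSON-serializable label marking an asset as AI-generated."""
    return {
        "asset_id": asset_id,
        "digital_source_type": IPTC_AI_GENERATED,
        "generator": generator,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }

label = build_ai_content_label("img-0042", "example-diffusion-v3")
print(json.dumps(label, indent=2))
```

In practice you would embed a label like this in the asset's metadata (XMP/IPTC fields, or a C2PA manifest) rather than ship it as a sidecar, but the point stands: start attaching provenance data to generated content now, so the pipeline exists before the deadline.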

Next Steps

The delayed deadlines for high-risk systems don't mean you can delay preparation. Three categories of obligations are already active, the transparency deadline is 8 months away, and the standards will eventually be ready — and then you'll need to be compliant.

Check which obligations apply to your business.


Sources: European Parliament, EU Council, LegalNodes, Kennedys Law, Reuters

Want to know your compliance status?

Free questionnaire, AI classification, and compliance score in 30 minutes.