ComplianceForge AI
GUIDELINE

Guidelines on Transparency Obligations for AI Systems

European Commission guidelines on transparency obligations for providers and deployers of AI systems under Article 50 of the EU AI Act, including labelling of AI-generated content and the Code of Practice.

What is this document?

The Transparency Obligations Guidelines are a set of documents by the European Commission that clarify the application of Article 50 of the EU AI Act. The Commission is developing two parallel instruments:

  1. Non-binding guidelines — covering Article 50 in its entirety and clarifying key concepts
  2. Code of Practice on labelling AI-generated content — covering specifically the obligations under Art. 50(2) and (4) on technical labelling

The European Commission published the first draft of the Code of Practice on labelling on 17 December 2025. A further draft is expected in March 2026, with the final version in June 2026, ahead of the entry into application of Article 50 in November 2026 (extended from August 2026 under Omnibus VII, March 2026).

Key points

Transparency obligations under Article 50

Article 50 prescribes obligations for different categories of AI systems:

For AI systems that directly interact with persons (Art. 50(1))

Providers must ensure that the AI system is designed so that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances.

For emotion recognition and biometric categorisation systems (Art. 50(3))

Deployers must inform natural persons who are exposed to such systems about their operation.

For AI systems that generate synthetic content (Art. 50(2))

Providers must ensure that the outputs of the AI system (audio, image, video, text) are marked in a machine-readable format and are detectable as artificially generated or manipulated.

For deepfake content (Art. 50(4))

Deployers who use an AI system to generate or manipulate image, audio, or video content that appreciably resembles existing persons, objects, places, or events must disclose that the content has been artificially generated or manipulated.

Technical requirements for labelling

The Code of Practice on labelling specifies:

  • Technical solutions must be effective, interoperable, robust, and reliable
  • Labelling must be in a machine-readable format
  • Solutions should include metadata, watermarks, or other appropriate technical methods
  • Differences between content types (text, image, audio, video) must be taken into account
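To make "machine-readable marking" concrete, the sketch below pairs a generated asset with a provenance record bound to the asset by a content hash. This is only an illustration of the general idea; the field names (`ai_generated`, `generator`, `sha256`) are assumptions, not a schema from the draft Code of Practice.

```python
import hashlib
import json

def make_provenance_record(content: bytes, generator: str) -> str:
    """Build a machine-readable provenance record for a generated asset.

    Field names are illustrative; the Code of Practice does not (yet)
    prescribe a concrete schema.
    """
    record = {
        "ai_generated": True,
        "generator": generator,  # hypothetical model/tool identifier
        # The hash binds the record to the exact asset bytes, so the
        # marking is detectable and tamper-evident.
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(record, sort_keys=True)

def verify_provenance(content: bytes, record_json: str) -> bool:
    """Check that a record matches the asset it claims to describe."""
    record = json.loads(record_json)
    return (
        record.get("ai_generated") is True
        and record.get("sha256") == hashlib.sha256(content).hexdigest()
    )
```

In practice, production systems embed such records directly in the file format (e.g. image metadata chunks) rather than as a sidecar, but the hash-binding principle is the same.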

Exceptions

Transparency obligations do not apply to AI systems that perform an assistive function for standard editing or that do not substantially alter the input data. In addition, lighter disclosure obligations apply to artistic, creative, and satirical content.

How does this apply to organisations?

Compliance deadline

Transparency obligations under Article 50 become fully applicable on 2 November 2026 — extended from August 2026 under Omnibus VII (March 2026).

For AI system providers

  1. Chatbots and virtual assistants — Implement clear notification to users that they are interacting with an AI system
  2. Generative AI — Ensure technical mechanisms for labelling generated content (watermarks, metadata)
  3. Documentation — Prepare technical documentation on implemented transparency measures
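For the chatbot case, the Art. 50(1) notification can be as simple as a wrapper applied to every reply. The wording of the notice and the `obvious_from_context` flag below are illustrative assumptions, not language prescribed by the Act.

```python
def with_ai_disclosure(reply: str, obvious_from_context: bool = False) -> str:
    """Prepend an AI-interaction notice unless disclosure is already obvious.

    Art. 50(1) waives the notice where the AI nature is obvious from the
    circumstances; the flag models that judgment call.
    """
    if obvious_from_context:
        return reply
    return "You are chatting with an AI assistant.\n\n" + reply
```

A wrapper at the response boundary keeps the disclosure logic in one place, which also makes it easy to document for compliance purposes.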

For deployers of AI systems

  1. Deepfake content — Clearly label all AI-generated or manipulated content before publication
  2. Emotion recognition systems — Inform all persons exposed to such systems
  3. Public information — When AI is used to generate text published with the purpose of informing the public on matters of public interest, the AI-generated nature of the content must be disclosed

Practical steps

  1. Inventory — Identify all AI systems that generate content or directly interact with users
  2. Technical assessment — Determine which technical labelling measures you need to implement
  3. Follow the Code of Practice — Participate in the development process or monitor the final version (June 2026)
  4. Test solutions — Implement and test technical labelling mechanisms before the deadline
  5. Training — Educate teams on transparency obligations
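Step 1, the inventory, can start as a simple structured record per system plus a filter for Article 50 scope. The fields below are an illustrative minimum, not a required schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of an Article 50 inventory; fields are illustrative."""
    name: str
    interacts_with_users: bool       # may trigger Art. 50(1)
    generates_content: bool          # may trigger Art. 50(2)
    content_types: list = field(default_factory=list)  # e.g. ["text", "image"]

def article50_scope(systems):
    """Return the systems that fall under at least one Article 50 obligation."""
    return [s for s in systems if s.interacts_with_users or s.generates_content]
```

Once the inventory exists, the scoped list drives the technical assessment in step 2.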

Relevant articles of the EU AI Act

Article          | Topic
Art. 50          | Transparency obligations for providers and deployers
Art. 50(1)       | Notification of interaction with an AI system
Art. 50(2)       | Labelling of synthetic content
Art. 50(3)       | Notification for emotion recognition systems
Art. 50(4)       | Labelling of deepfake content
Art. 96          | Commission guidelines on practical implementation
Recitals 132-136 | Explanatory considerations for transparency obligations

Source documents

  • Guidelines and Code of Practice on transparent AI systems
  • Code of Practice on marking and labelling of AI-generated content
  • Article 50 — Transparency Obligations
