
By Nicole
12.03.2026

AI labeling obligation in advertising and marketing:
EU AI Act, deepfake regulation and UWG explained in an understandable way

From August 2026, the EU AI Act will impose binding transparency obligations for certain AI-generated or AI-manipulated content. However, what appears to be clearly formulated in the legal text raises numerous detailed questions in application, particularly in the areas of advertising, packaging and brand staging. There is a tension between technical innovation, consumer protection and existing competition law that is currently the subject of debate.

EU AI Act and labeling requirements: When must AI-generated content be labeled as such?

Article 50 EU AI Act regulates the transparency obligation for AI-generated or AI-manipulated content. A labeling obligation exists in particular when it comes to so-called “deepfakes” within the meaning of Art. 3 No. 60 EU AI Act. A deepfake is not just the manipulated voice of a politician or a prominent fake video. Rather, the legal text defines deepfakes as any AI-generated or manipulated image, sound or video content that resembles real persons, objects, places, facilities or events and “would falsely appear to be genuine or truthful”.

This wording alone shows how broad the legal framework is. The decisive factor is not an intention to deceive, but rather the possibility that content could be misinterpreted as genuine. At the same time, the legislator differentiates: if content is obviously artistic, creative, satirical or fictional, the transparency obligation should be limited to an appropriate disclosure that “does not impair the enjoyment of the work”.

What the EU AI Act does not provide, however, are clear, practicable criteria for where this boundary runs in everyday brand communication. When is a motif still design – and when is it already suitable for deception? When is something “obviously” artificial? And what does this mean for advertising, for example, which is often staged per se?

 

EU AI Act implementation and Code of Practice: How is AI labeling currently interpreted?

The interpretation of the law is currently still very much in flux. Two drafts of a “Code of Practice on Transparency of AI-Generated Content” have now been published on the European Commission’s platform. The first draft was followed by a revised second version at the beginning of March 2026, which clarifies some points but leaves others open.

In both drafts, transparency is consistently considered from the user’s perspective. AI-generated or AI-manipulated content should be recognizable in particular if it can have a real impact and thus influence trust, public perception or the information context.

The current Code of Practice draft explicitly justifies transparency measures with the protection of “public trust and democratic discourse”. The draft specifies individual aspects of the transparency obligation (e.g. in the case of deepfakes or editorially responsible texts). At the same time, central questions of practical application remain open, such as where the boundary between creative staging and deceptive content lies.

The topic is also viewed from a consumer protection perspective at national level. In its guidelines on AI-generated content, the Wettbewerbszentrale points out that it is not the technical production method of images that is decisive, but whether consumers can be given a false impression of the authenticity or actual circumstances of the images. This logic also corresponds to the German Unfair Competition Act (UWG), which prohibited misleading advertising long before AI existed. Even independently of the labeling obligation under the EU AI Act from August 2026, misleading advertising within the meaning of Section 5 UWG may therefore already be legally relevant today.

 

AI images in advertising: when does the transparency obligation under the EU AI Act apply?

It is undisputed that consumers should not be deceived. What is disputed is how far transparency should extend. The regulation on mandatory labeling is linked to the technology, not the visual result. Therefore, two motifs could appear identical but be assessed differently in legal terms – depending on whether they were created using AI, CGI or classic image processing. However, advertising has always worked with design and optimization. The central question is therefore: should transparency be based on the production method or on the actual misleading effect?

In the current industry debate, another perspective is increasingly being advocated: labeling should be geared more strongly towards the potential damage a piece of content could cause. Instead of the technical creation of an image, the decisive factor would then be whether a motif can influence real decisions, trust or public perception.

 

EU AI Act in practice: risks and options for brands

Trust is a key value, especially in the sensitive environment of food, organic and sustainability brands. At the same time, product images, so-called “serving suggestions” on packaging and campaign motifs have never been pure documentation. Many brands therefore face a trade-off: label, at the risk of weakening design statements, or avoid using AI altogether? Both choices are legitimate.

One point is often overlooked in the current debate: AI is not a must. It is a tool that accelerates processes and expands possibilities. If labeling is legally required but does not fit the brand strategy, there are still proven alternatives: photography, studio staging, stock material and classic retouching remain valid production methods. As an agency with its own photo studio, we can offer both routes seamlessly – especially when maximum clarity and legal certainty are required. What matters is that the image matches the brand, the product and the context.

 

Using AI in a legally compliant way: Transparency, documentation and compliance in marketing

Agencies are not allowed to provide legal advice. What we can do is inform, classify and make transparent where AI is used in the project. On this basis, our customers can decide how they want to deal with labeling.

The EU AI Act is currently being fleshed out further, including through the ongoing Code of Practice process, future guidelines from the EU AI Office, practical cases and case law. Until then, the situation remains dynamic – and requires a considered approach.

We are staying on top of this issue, monitoring developments and updating our assessment as soon as clarifications are available. It will be crucial to use transparency where it protects trust – without losing impact where design has always been part of communication.

FAQ on the AI labeling obligation: EU AI Act, deepfakes, UWG and marketing practice

Does every use of AI have to be labeled?

The EU AI Act does not provide for a blanket labeling obligation for every use of AI. According to Art. 50 EU AI Act, a disclosure obligation arises above all when AI-generated or AI-manipulated content qualifies as a so-called deepfake – i.e. it appears real and could be misunderstood by consumers as genuine. Exactly how this line is to be drawn in practice is still being discussed.

What counts as a deepfake under the EU AI Act?

The EU AI Act defines deepfakes as AI-generated or manipulated image, sound or video content that resembles real persons, objects, places or events and would falsely appear to be genuine or truthful (Art. 3 No. 60 EU AI Act). An intention to deceive is not required.

Does the labeling obligation also apply to advertising?

Yes, in principle. The EU AI Act does not differentiate between journalistic, artistic and advertising content. The decisive factor is not the type of use, but whether content qualifies as a so-called deepfake – in other words, whether it appears real and could be misinterpreted as authentic. At the same time, the law provides for exceptions for obviously creative or artistic works, but without clearly defining them.

Do AI-supported standard image edits have to be labeled?

According to the currently widely held view: no, provided that the editing is purely supportive and the authenticity of the content is not altered. However, this issue is the subject of intense debate. Individual interpretation papers go further, while industry associations and many lawyers consider the labeling of such standard edits to be disproportionate. The EU Commission’s current draft Code of Practice focuses primarily on deepfakes and content with a potential impact on public trust or the information context.

Is the legal situation on the labeling obligation already settled?

The legal situation has not yet been conclusively clarified. There are:

  • the legal text of the EU AI Act,
  • draft interpretative guidance from the European Commission (the Code of Practice on Transparency of AI-Generated Content),
  • recommendations of the Wettbewerbszentrale,
  • but no relevant case law yet.

For this reason, there are currently differing legal opinions on the scope of the labeling obligation.

What role does the UWG play alongside the EU AI Act?

The UWG (Section 5 UWG – misleading commercial practices) applies independently of the EU AI Act. AI-generated content can already be unlawful today if it misleads consumers about material circumstances – such as the origin, authenticity or characteristics of a product. The decisive factor here is not the use of AI itself, but the potential misleading effect.

Do AI-generated persons in advertising have to be labeled?

Current assessment: yes, in many cases probably. AI-generated persons are considered particularly sensitive, as they can imitate real people or be perceived as real. Here there is an increased risk with regard to the EU AI Act as well as personal rights and competition law.

How must the labeling be implemented?

If labeling is required, it must be clear, understandable and close to the context. For image motifs, this usually means directly on the motif or in its immediate vicinity – not hidden in the legal notice.

Do AI-generated texts have to be labeled?

For text-based content, Art. 50 para. 4 EU AI Act generally does not provide for a labeling obligation, provided that:

  • a human editorial review is carried out,
  • a person or organization assumes editorial responsibility for the content,
  • and there is no deception about authorship, opinion or experience.

This applies in particular to PR texts, website content or advertising texts with editorial responsibility.

What if labeling is required but does not fit the brand?

If labeling is legally required but undesirable from a brand perspective, there are still proven alternatives: classic photo productions, studio staging, stock material and manual retouching without generative AI processes.

AI is a tool – not a mandatory production method. The appropriate production path should always be chosen in the context of the brand, medium and risk appetite.

Are there already fines or court rulings on the AI labeling obligation?

At present, no relevant fines or court rulings on the AI labeling obligation are known in the European context. The EU AI Act applies in stages; the labeling obligation takes effect from August 2026. Many detailed questions will only be fleshed out in future guidelines and case law.

 

Companies should document the use of AI transparently, check for potential misleading risks and obtain a legal assessment in case of doubt. As long as the details of the EU AI Act have not yet been conclusively clarified and the ongoing code of practice process at EU level has not yet been completed, a risk-conscious case-by-case assessment is recommended instead of making blanket assumptions.

We continuously monitor legal developments, inform our clients transparently about risks and options – and document the use of AI in the project. However, the legal decision as to whether and how to label always lies with the client. We do not provide legal advice, but rather a well-founded, practical classification.

Sources:

EU AI Act:
https://eur-lex.europa.eu/legal-content/DE/TXT/HTML/?uri=OJ:L_202401689

First Draft Code of Practice on Transparency of AI-Generated Content
https://digital-strategy.ec.europa.eu/en/library/first-draft-code-practice-transparency-ai-generated-content

Second Draft Code of Practice on Transparency of AI-Generated Content
https://digital-strategy.ec.europa.eu/en/library/commission-publishes-second-draft-code-practice-marking-and-labelling-ai-generated-content

Guidelines for labeling AI-generated content from the German Wettbewerbszentrale
https://www.wettbewerbszentrale.de/wp-content/uploads/2026/02/2026_2_Leitfaden_KI_generierte_inhalte_1-1.pdf

