
By Nicole
12.03.2026

AI labeling obligations in advertising and marketing:
the EU AI Act, deepfake rules and the UWG explained in plain terms

From 2 August 2026, the EU AI Act will impose binding transparency obligations for certain AI-generated or AI-manipulated content. Yet what appears clearly formulated in the legal text raises numerous detailed questions in practice, particularly in advertising, packaging and brand staging. A tension between technological innovation, consumer protection and existing competition law is currently the subject of debate.

EU AI Act and labeling requirements: When must AI-generated content be labeled as such?

Article 50 of the EU AI Act regulates the transparency obligation for AI-generated or AI-manipulated content. A labeling obligation exists in particular for so-called “deepfakes” within the meaning of Art. 3 No. 60 of the EU AI Act. A deepfake is not just the manipulated voice of a politician or a fake video of a celebrity. Rather, the legal text defines a deepfake as any AI-generated or AI-manipulated image, audio or video content that resembles real persons, objects, places, entities or events and “would falsely appear to be genuine or truthful”.

This wording alone shows how broad the legal framework is. The decisive factor is not an intention to deceive, but the possibility that the content could be mistaken for genuine. At the same time, the legislator differentiates: where content is obviously artistic, creative, satirical or fictional, the transparency obligation is limited to an appropriate disclosure that “does not impair the enjoyment of the work”.

What the EU AI Act does not provide, however, are clear, practicable criteria for where this boundary runs in everyday brand communication. When is a motif still design, and when is it already capable of deceiving? When is something “obviously” artificial? And what does this mean for advertising, which is by its nature staged?


EU AI Act implementation and Code of Practice: How is AI labeling currently interpreted?

Interpretation of the law is still very much in flux. Two drafts of a “Code of Practice on Transparency of AI-Generated Content” have now been published on the European Commission’s platform. The first draft was followed by a revised second version in early March 2026, which clarifies some points but leaves other open questions unresolved.

In both drafts, transparency is consistently considered from the user’s perspective. AI-generated or AI-manipulated content should be recognizable in particular if it can have a real impact and thus influence trust, public perception or the information context.

The current Code of Practice draft explicitly justifies transparency measures with the protection of “public trust and democratic discourse”. The draft specifies individual aspects of the transparency obligation (e.g. in the case of deepfakes or editorially responsible texts). At the same time, central questions of practical application remain open, such as where the boundary between creative staging and deceptive content lies.

The topic is also viewed from a consumer protection perspective at national level. In its guidelines on AI-generated content, the Wettbewerbszentrale points out that what matters is not the technical method used to produce images, but whether consumers can be given a false impression of the authenticity or actual circumstances of the images. This logic also corresponds to the German Act against Unfair Competition (UWG), which has prohibited misleading advertising since long before AI. Even independently of the labeling obligation under the EU AI Act from August 2026, advertising may therefore already be legally relevant as misleading within the meaning of Section 5 UWG.


AI images in advertising: when does the transparency obligation under the EU AI Act apply?

It is undisputed that consumers should not be deceived. What is disputed is how far transparency must extend. The labeling obligation is tied to the technology, not to the visual result. Two motifs could therefore look identical yet be assessed differently in legal terms, depending on whether they were created with AI, CGI or classic image editing. Yet advertising has always worked with design and optimization. The central question is therefore: should transparency be based on the production method or on the actual misleading effect?

In the current industry debate, another perspective is gaining ground: labeling should be geared more strongly towards the potential harm a piece of content could cause. The decisive factor would then not be how an image was technically created, but whether a motif can influence real decisions, trust or public perception.


EU AI Act in practice: risks and options for brands

Trust is a key value, especially in the sensitive environment of food, organic and sustainability brands. At the same time, product images, so-called “serving suggestions” on packaging and campaign motifs have never been pure documentation. Many brands therefore face a trade-off: label, and risk weakening the design statement? Or avoid using AI altogether? Both are legitimate choices.

One point is often overlooked in the current debate: AI is not a must. It is a tool that accelerates processes and expands possibilities. If labeling is legally required but does not fit the brand strategy, proven alternatives remain: photography, studio staging, stock material and classic retouching are still valid production methods. As an agency with its own photo studio, we can offer both approaches seamlessly, especially when maximum clarity and legal certainty are required. What matters is that the image fits the brand, the product and the context.


Using AI in a legally compliant way: Transparency, documentation and compliance in marketing

Agencies are not permitted to provide legal advice. What we can do is inform, classify and make transparent where AI is used in a project. On this basis, our clients can decide how they want to handle labeling.

The EU AI Act is currently being fleshed out further, including through the ongoing Code of Practice process, future guidelines from the EU AI Office, practical cases and case law. Until then, the situation remains dynamic – and requires a considered approach.

We are staying on top of this issue, monitoring developments and updating our assessment as soon as clarifications are available. It will be crucial to use transparency where it protects trust – without losing impact where design has always been part of communication.

Sources:

EU AI Act:
https://eur-lex.europa.eu/legal-content/DE/TXT/HTML/?uri=OJ:L_202401689

First Draft Code of Practice on Transparency of AI-Generated Content
https://digital-strategy.ec.europa.eu/en/library/first-draft-code-practice-transparency-ai-generated-content

Second Draft Code of Practice on Transparency of AI-Generated Content
https://digital-strategy.ec.europa.eu/en/library/commission-publishes-second-draft-code-practice-marking-and-labelling-ai-generated-content

Wettbewerbszentrale guidelines on labeling AI-generated content:
https://www.wettbewerbszentrale.de/wp-content/uploads/2026/02/2026_2_Leitfaden_KI_generierte_inhalte_1-1.pdf

