You own your face: Could Denmark’s AI legal response for deepfakes become a European model?


This is a blog post written by Bojan Kordalov, Senior Expert in Communications, Public Affairs, and AI Literacy. He has over 20 years of experience in visibility, public relations campaigns, and strategic consultancy. Bojan is currently serving as a Director for policy and communication at the European Centre of Excellence (ECE) in Brussels.

We’ve all seen them - those catchy videos of world leaders dancing together, political rivals singing love songs, or celebrities saying things they’d never say. We watch, we laugh, we share. It’s entertaining, creative, sometimes even clever.

But what if it was you in one of those videos?

What if your face and voice were used (without your consent) to say something you’d never say, in a context you don’t support, or worse, in a way that harms your reputation, safety, or career?

What once seemed like harmless humour suddenly becomes personal, even dangerous. And it’s no longer a joke.

This is probably why, in 2025, Denmark made global headlines by proposing a pioneering law that protects citizens from the misuse of AI-generated deepfakes. At its heart lies a simple but powerful message: you own your face. For years, we have spoken about the need for ethical digital transformation, AI transparency, and meaningful public communication. Now, Denmark has taken a bold legislative step that puts those principles into practice.

As someone deeply involved in media literacy, AI governance, and strategic communication across Europe, I view this law as more than a national initiative: it is a democratic milestone.

A democratic response to a technological dilemma

The Danish law addresses multiple dimensions of AI’s impact on society: it requires clear labelling of synthetic media, ensures that creators and citizens retain copyright over their likeness and voice, and establishes a regulatory body to oversee compliance. Crucially, it prohibits the unauthorised use of AI-generated deepfakes during election periods, thereby protecting not just individuals, but the integrity of democratic institutions.

This balanced and proactive approach reflects what I have long advocated: AI must be governed not by fear, but by democratic values. Regulation should not block innovation, but it must ensure that digital technologies serve people, not manipulate them.

AI transparency must begin with truth

In my decades of work across different countries and regions, I have consistently stressed the importance of media and digital literacy, now complemented by what I consider the most important literacy of all: AI literacy. The Danish law gives that literacy a legal backbone. When deepfakes are misused to distort political messaging or spread disinformation, we need more than fact-checkers. We need legal safeguards, and we must remind ourselves that we proudly own our personal identity, including our face, voice, and image.

The phrase “not anti-AI, but pro-authenticity,” used by Denmark’s Culture Minister, perfectly captures the spirit we need. We must foster trust and facts in the digital space, and that starts by ensuring transparency about what is real and what is synthetic. This matters deeply, because one day we might wake up in a world where we doubt what is reality and what is illusion, asking ourselves: “Am I really awake, or still dreaming?”

A model for EU candidate countries and the EU itself?

At the European Centre of Excellence (ECE Brussels), we work to empower institutions and citizens alike to navigate the digital age responsibly. That is why I believe Denmark’s initiative should not remain an isolated case. This model deserves to be thoroughly discussed in Brussels and beyond, particularly as the EU Artificial Intelligence Act enters its implementation phase.

But it is also relevant for EU candidate countries. Where national institutions are still building digital trust with citizens, a law like Denmark’s could serve as both a legal and a symbolic tool to protect citizens from disinformation and reinforce democratic norms.

Where AI ethics meets policy

The question of whether a person owns their own face, voice, or likeness in the digital age is not philosophical - it’s now a legal policy reality in Denmark. That should prompt all of us to reconsider how we structure digital rights, regulate AI content, and preserve public trust.

Europe needs more than innovation. It needs innovation rooted in ethics, transparency, and inclusion. Denmark has shown what that can look like. Now let’s build on it, because AI must be governed not by fear, but by democratic values that strengthen freedoms and uphold ethical AI principles.

Photo: Bojan Kordalov, European Centre of Excellence (ECE Brussels)

Addendum - Legal and other context:

“You own your face: Could Denmark’s AI legal response for deepfakes become a European model?” is an opinion by Bojan Kordalov, Senior Strategic Communication and AI Literacy expert currently serving as a Director of Policy and Communication at the European Centre of Excellence (ECE) in Brussels. The opinion refers to a proposed sweeping amendment to Denmark’s Copyright Act, aimed at protecting citizens from AI-generated deepfakes that replicate their likeness, voice, or physical traits without consent. If passed, the law will mark the first of its kind in Europe to treat biometric identity as a form of intellectual property [5]. The legislation introduces mandatory labelling of synthetic media, strengthens copyright protections for individuals’ voices and likenesses, and establishes new transparency and enforcement mechanisms. Widely considered one of the first national laws to address the intersection of artificial intelligence, identity rights, and misinformation, it positions Denmark as a frontrunner in digital rights governance in the AI era [1].

Background

The legislation was introduced in response to the growing presence of synthetic media in Danish public discourse, including political deepfakes and viral impersonations. A widely circulated deepfake video of a Danish television presenter helped galvanise public support for stricter regulation [2]. Consultations held between 2024 and 2025 involved artists, copyright experts, civil society organisations, and technology firms. The Minister of Culture described the initiative as “not anti-AI, but pro-authenticity” [2].

Key Provisions

The law includes the following components [1, 2, 3, 6]:

  • Identity rights: Individuals are granted copyright-like protection over their own image, voice, and likeness, enabling them to request removal or seek compensation if used without consent, except in cases of parody or satire.

  • Use of copyrighted material: Developers of AI systems must obtain appropriate rights when using copyrighted content for training purposes.

  • Liability for platforms: Online platforms are obligated to act upon reports of unauthorized AI-generated content and may face penalties if they fail to comply.

  • International scope: The law applies to any AI-generated content made available in Denmark, regardless of where it was created.

  • Political safeguards: Synthetic content impersonating public officials is prohibited (especially during election periods) unless consent is given and the content is clearly labelled [1].

Legislative Process

On June 26, 2025, Denmark’s Minister of Culture, Jakob Engel-Schmidt, introduced the “Draft Proposal for an Act to Amend the Danish Copyright Act” (UDKAST Forslag til Lov om ændring af lov om ophavsret), seeking to curb the misuse of AI in generating deepfake content. The proposed legislation is scheduled to enter into force on March 31, 2026, pending parliamentary approval [5].

The bill introduces legal protections against the unauthorised digital replication of an individual’s face, body, or voice, regardless of their public status. By explicitly framing these attributes as protected expressions, the law diverges from traditional copyright frameworks - which typically safeguard only original creative works - and repositions biometric identity within the realm of intellectual property.

Reactions and Impact

Recent surveys confirm that a large majority of Danish citizens support regulatory action on artificial intelligence, with 71% believing that AI regulation is needed and 86% calling for laws to address AI-generated misinformation. The law has been praised by human rights organisations and cultural groups for its defence of individual identity in the digital age.

Technology companies expressed mixed reactions. While large platforms signalled compliance, startups raised concerns over regulatory burden and potential innovation constraints [3, 6].

Legal analysts described Denmark’s approach as a form of “copyright-first AI governance,” with Panitch Law noting it as a potential global precedent for safeguarding human authorship [6].

Comparative Context

In July 2025, Denmark announced plans to promote the law as a model for other EU Member States. According to Euractiv, Danish officials intend to present the initiative during upcoming EU Council discussions on digital regulation [4].

Sources

  1. The New York Times – Denmark Passes Landmark Law on Deepfake Copyright and AI Transparency

  2. The Guardian – Deepfakes, Democracy, and Copyright

  3. WBN Digital – Denmark Copyrights Faces Against AI Deepfakes

  4. Euractiv – Denmark Wants to Copy-Paste Its Anti-Deepfake Law Across Europe

  5. Kim Avocat – Danish Copyright Law Proposal Summary (Unofficial)

  6. Panitch Law – Harnessing Copyright Law to Tackle Deepfakes
