In a move without precedent in Europe (albeit not wholly unforeseen), the Danish government announced on 26 June 2025 amendments to the Danish Copyright Act 2014 (“the Act”) that would classify a person's face, body, and voice as works protected under copyright law.
With the Ministry of Culture having secured a bipartisan agreement to amend the country’s current copyright laws, the new bill was submitted for public consultation on 7 July 2025, allowing stakeholders, legal experts, and civil society to provide their opinions over the summer. A parliamentary vote is scheduled for the autumn, and the legislation is expected to be enacted by the end of 2025.
The growing capabilities of generative AI and deepfake tools have made it increasingly easy to digitally impersonate individuals without their consent. Denmark’s amendments to the Act seek to protect against such misuse by granting individuals exclusive rights to their own appearance and voice, as if these were creative works.
This initiative follows the worrying growth of deepfake fraud between 2022 and 2023. Deep Media has reported that over 500,000 video and audio deepfakes were shared on social media in 2023 alone.
Deepfakes, a portmanteau of “deep learning” and “fake,” are hyper-realistic digital manipulations of audio and video content, often indistinguishable from real footage. They are created using advanced AI algorithms based on machine learning and neural networks, which learn to replicate the appearance and voice of the individuals they are given to mimic. In the entertainment sector, they can be used to create realistic effects or to recreate deceased actors. However, they can also be misused for bribery, fraud, harassment, political content manipulation, or the appropriation of artists’ identities and works, and can cause very serious psychological, financial, and reputational injuries. A notable illustration is the proliferation of “deepfake” interviews of the former England men’s football manager, Gareth Southgate, over the course of the Euro 2024 football tournament.
The new legislation aims to give individuals control over their identity in digital environments, combining intellectual property and personality rights and allowing individuals to take legal action against unauthorised AI-generated representations of themselves or their work, a protection the creative industries have long called for.
At present, the Act grants copyright ownership to the person who produced the work or to whom the copyright has been entrusted. The new bill introduces provisions prohibiting the sharing, without consent, of realistic digital reproductions of personal characteristics such as a person's face, body, and voice.
The new bill also extends the protection already afforded to artists under section 65 of the Act, which presently prevents performances of literary or artistic works from being recorded or made available to the public without consent for 50 years after they took place. The extended protection would cover performances other than those of literary or artistic works and would introduce measures against realistic, digitally generated imitations of performances being recorded and shared without consent for 50 years after the performing artist’s death.
The new legislation provides individuals and performing artists with a legal basis to request that online platforms remove deepfake content depicting their image or work that was shared without consent, and to seek damages and compensation under the general rules of Danish law. Tech platforms would also be obliged to remove illegal content they are notified of, or face severe fines issued by the DSA Supervisory Authority and the EU Commission, as provided by the European Union’s Digital Services Act 2022.
To protect freedom of expression, and under section 24(b) of the Act, satire, parody, and caricature would still be permitted. However, these exceptions would not be automatic, as courts would conduct contextual judicial analysis to ensure exceptions are not misused.
Traditional IP laws in the UK are ill-equipped to handle the misuse of deepfakes. Indeed, in English law, there are no specific image rights, which is the right of an individual to control the use of their identity, including their likeness, personality, or any other distinctive traits.
The copyright protection attributed to artistic works is available only to the copyright owner which, in the case of a photograph of an individual, for example, would be the photographer rather than the individual photographed.
Therefore, individuals in the UK must rely on other legal options to help protect these rights indirectly.
For example, for the commercial use of their image, individuals can rely on contractual agreements (often used by professional models) that can include the payment of royalties for an agreed period of time.
To protect themselves against the non-consensual use of their image, individuals can rely on data protection law, defamation, or, most commonly, the tort of passing off. The tort of passing off protects the goodwill associated with a person’s name or image from being misrepresented or used without consent. The three essential elements of passing off are the existence of goodwill attached to goods and services, a misrepresentation by the defendant leading to confusion, and resultant damage to the individual’s goodwill. However, establishing goodwill involves strict requirements: one must demonstrate that their reputation is distinctive enough in a specific location (which requires evidence of significant marketing, publicity, media references, etc.) that consumers would likely be confused if someone else used their image. This makes it difficult for individuals other than celebrities to prove passing off and to rely on it to combat the misuse of their identity.
Regarding the criminal aspect of sexually explicit deepfakes, the Online Safety Act 2023 has criminalised the sharing of, or threatening to share, intimate deepfakes, and the UK Data (Use and Access) Act 2025 (which received royal assent on 19 June 2025) was amended to include the criminalisation of the non-consensual creation of sexually explicit deepfakes. Violators could face an “unlimited fine” and a possible two-year prison sentence under the UK’s Sexual Offences Act for creating sexual deepfakes.
The absence of recognition of image rights makes it extremely difficult for an individual to initiate a civil action against someone creating deepfakes of them. By contrast, the Danish legislation opens up new avenues of protection against the misuse of deepfakes. This aligns with a growing recognition that identity, especially in digital contexts, is a form of economic and reputational capital deserving of legal protection.
The Danish legislation is expected to influence EU and international law, particularly as countries grapple with the implications of AI-generated content. It should also prompt tech companies to implement policies to tackle the use of deepfakes, as YouTube has already done: it has put in place policies allowing people to make takedown requests for AI-generated video or audio of their likeness or voice, and is currently working on tools to detect AI-generated faces and voices.
As for the USA, notwithstanding the risk that such a law might be jeopardised by the Republican Party’s upcoming “Big Beautiful Bill” (which would prevent states from regulating AI for 10 years), the bipartisan support for legislation on AI-generated deepfakes could lead to a federal law emerging from Congress. Indeed, the current US president, Mr Donald Trump, signed the Take It Down Act in May 2025, banning the non-consensual online publication of AI-generated and real sexually explicit images.
Other jurisdictions worldwide have implemented similar legislation seeking to limit the harm caused by AI-generated content, including the UK (as mentioned previously), South Korea, Australia, and France. Notably, the EU AI Act, which entered into force on 1 August 2024, requires that artificially created or manipulated material be disclosed as such, and prohibits manipulative AI-generated content that impairs informed decision-making.
However, Denmark’s model offers a stronger and clearer legal mechanism, potentially positioning it as a template for reform in other jurisdictions. Within the EU, Denmark’s law could serve as an outline for integrating likeness rights into the AI Act or forthcoming digital identity regulations.
Denmark's proposed legislation represents a bold new direction in identity protection law, one that treats personal likenesses with the same seriousness as copyrighted works. It provides individuals with clear, enforceable rights to protect against digital impersonation and AI misuse.
By anchoring these rights in the domain of intellectual property, Denmark not only fills a current legal gap but also offers a framework that could be adapted internationally. If successful, this legal innovation may shape how identity, AI, and privacy intersect in the years to come.
However, on the broader question of the use of individuals’ identities, one must also consider the commercial implications. With copyright-style protection soon to be available for individuals’ identities in Denmark, will it open the floodgates for individuals to commercialise their own face, voice, and likeness, offering them to the highest “bidder”? As highlighted by the 2024 SAG-AFTRA strike, the growing use of AI technology to reproduce individuals’ likenesses underscores the need for clear guidelines, notably on consent and compensation. Indeed, the commercialisation of one’s identity introduces new risks with regard to exploitation, consent, and the erosion (forced or otherwise) of personal autonomy.
If you require assistance with any aspect of intellectual property, please contact Anouk Bruel or Wing Ming Choi on +44 (0)204 600 9907 or email info@ilaw.co.uk.