As deepfakes become increasingly accessible and realistic, countries are scrambling to find the right legal response. In this post, Arnav Mathur examines Denmark’s proposed copyright-based approach and what India could learn from it for the ongoing debates on regulating AI-generated content. He argues that Denmark’s proposal to regulate deepfakes through copyright law is conceptually flawed and risks doctrinal incoherence, overreach, and the commodification of identity. Arnav is a 4th-year B.A. LL.B student at NALSAR University of Law, Hyderabad.

Mask Off: Copyright, Deepfakes and the Commodification of the Self
By Arnav Mathur
Deepfakes, hyper-realistic imitations of a person’s face, voice, or body, have moved from forum curiosities to real-world harms. We have seen a fake clip of President Zelenskyy surrendering, AI-generated videos of Indian actors, and uses in elections. The toolset is extensive and getting easier to use, from DeepFaceLab and DeepSwap for hands-on face swaps, to Reface, HeyGen, and Pika Labs for quick phone edits, to ElevenLabs for voice cloning, and newer video engines like Runway Gen 4, Synthesia, D-ID, Meta’s Vibes, and OpenAI’s Sora for prompt-driven generation. With that power on ordinary laptops and phones, anyone’s likeness can be copied, manipulated, and shared in minutes. The law, inevitably, is playing catch-up.
India’s response has largely protected celebrities, relying on personality and publicity rights, passing off, defamation, and injunctive relief to secure rapid takedowns. Denmark, by contrast, has proposed a legislative amendment that would extend protection to everyone, not just public figures. The June 26, 2025, amendment bill seeks to insert two new provisions into the Danish Copyright Act (“Bill”). The first protects realistic digital imitations of a performer’s appearance or expressive acts. The second, more expansive provision, prohibits making publicly available any realistic digital imitation of a person’s face, voice, or body without their consent. The protection would last for fifty years after the person’s death and is designed to work in conjunction with the EU’s Digital Services Act, facilitating rapid online takedowns. It also includes limited exceptions for parody, satire, caricature, and social or political criticism, though these are narrowly framed.
However, is copyright law the answer to the deepfake problem, and can copyright doctrine coherently and legitimately regulate deepfake harms? This post critically analyses the Bill and argues that copyright is a poor fit, as it risks doctrinal incoherence, commodification of personhood, and the chilling of protected expression.
This discussion is particularly relevant for India, where a government-appointed committee is exploring amendments to the Copyright Act to address AI-generated works. Any move to import copyright-style controls over identity, similar to Denmark’s approach, would risk replicating the same doctrinal and policy pitfalls, especially given India’s already expanding judicial protection of celebrity likeness.
The Structure and Scope of the Bill
At first blush, the Bill appears targeted and pragmatic. It introduces Section 65a for performers’ replicas and Section 73a for personal physical characteristics, limits the prohibition to making available to the public (dissemination) rather than criminalising private possession, and includes carve-outs for parody, satire, and research. The English translation (available on InfoJustice) states:
“Protection against realistic digitally generated imitations of personal characteristics
§ 73 a. Realistic digitally generated imitations of a natural person’s personal, physical characteristics must not be made available to the public without the consent of the imitated person.
Subsection 2. Subsection 1 does not apply to imitations primarily intended as caricature, satire, parody, pastiche, criticism of power, societal criticism, or similar unless the imitation constitutes misinformation likely to seriously harm the rights or substantial interests of others.
Subsection 3. Protection under subsection 1 lasts until 50 years have elapsed after the death of the imitated person.”
Functionally and institutionally, the Bill erects a copyright-style regime over identity. Law is defined by its remedies and institutional pathways as much as by its labels. Inserting these provisions into the Copyright Act creates an exclusive entitlement to prevent dissemination, attaches civil remedies such as injunctive relief and damages, and sets a term running for fifty years after death. Together, these three features, (1) exclusionary control over dissemination, (2) private enforcement remedies, and (3) long duration, reproduce the mechanics of copyright even if the text carefully avoids calling a bodily feature a "work of authorship."
That construction matters for three reasons. First, it severs intellectual-property exclusivity from the core justificatory principles of copyright. Copyright is built to reward creative authorship and to protect fixed expressive works. A nose, an accent, and a laugh are not selected or arranged by a creative mind in the requisite sense; they are biological and social facts. Treating such natural features as if they satisfied originality, authorship, and fixation distorts the doctrinal logic. Either persons become authors of features they never created, which is conceptually incoherent, or the law must invent a new category that severs the right from authorship altogether. Both moves are theoretically unstable.
Second, embedding identity protection within copyright’s institutional enforcement pathways matters in practice. The government’s intent to leverage the DSA and existing intermediary takedown procedures is consequential, as platforms have built high-velocity removal workflows for copyright claims that tend to err on the side of removal. Channelling identity disputes into that machinery substitutes private, platform-driven enforcement for the deliberative safeguards usually required in dignity-oriented law. It effectively deputises automated, market-oriented takedown systems to adjudicate what are fundamentally questions about autonomy, dignity, and free expression.
Third, the Bill’s temporal and transfer rules replicate copyright’s market logic, thereby enabling commodification. A fifty-year post-mortem term and the prospect of assignability and licensing mirror neighbouring rights more than personality or privacy law. As P. Bernt Hugenholtz argues, copyright presupposes assignability and market exchange; repackaging identity in this form opens the door to commercialisation and third-party exploitation. Estate managers, agencies, or corporations could acquire and monetise digital replicas of a person, the exact opposite of the Bill’s stated aim to prevent misuse. The experience of the U.S. right of publicity, where celebrity likenesses became monetised assets traded by estates and media companies, shows how protective instruments can morph into licensing instruments.
The Realism Threshold and Limitations
The Bill's substantive trigger is "realistic digital imitation." The realism threshold is unstable due to its subjectivity, context dependence, and susceptibility to gaming. What is "realistic" to one viewer may be perceived as plainly artificial by another: cultural context, age, and media literacy all shape how deepfakes are perceived. Liability thus becomes a judgment-laden inquiry ill-suited to automated takedown flows. It also invites circumvention: offenders can slightly stylise the imitation, lower its fidelity, attach disclaimers, or invoke parody to escape the realism threshold. Worse, many deepfake harms do not depend on photorealism. A grainy composite or a crude voice clone can devastate dignity and enable fraud. By prioritising aesthetic fidelity rather than outcome, the law risks missing the actual axis of harm.
Normatively, a copyright-style solution fares poorly against the standard theories of intellectual property. Utilitarian arguments fail because granting exclusivity over natural features produces no marginal incentive to create; instead, it raises transaction costs and chills expressive use. Locke's labour-mixing account collapses because individuals do not create their cheekbones or voices through labour. Personality-based justifications recognise the dignity interest at stake but caution against turning personhood into a marketable good; personality theory supports the protection of dignity, not commodification through assignable property regimes. Social-planning theory would require proportionate rights that sustain cultural diversity; by privileging control and licensing, the Bill risks narrowing rather than enriching public culture.
Conclusion
Deepfakes are a challenge to identity, trust and legal order. The Bill's attempt to take the mask off by grafting "realistic digital imitation" protections into copyright law is a bold move. However, the Bill's copyright framing creates a doctrinal mismatch with three consequences: (1) it severs the right from the authorial justification that legitimises exclusivity; (2) it enables the commercialisation of identity through assignable, long-term monopolies; and (3) it saddles courts and platforms with awkward, perceptual inquiries that are ill-suited to copyright methodology and lack any traditional theoretical justification. Moreover, its realism threshold is unstable, subjective, and easily evaded, and it fails to account for non-photorealistic harms.
For India, the lesson is cautionary. Indian courts have been quick to protect celebrity likeness through expansive injunctions, but Denmark's experiment shows the danger of going too far in the other direction: turning protection into property. India appears to be moving towards a regulatory, accountability-based response instead. MeitY's October 2025 draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 would introduce a definition of "synthetically generated information," mandatory labelling and metadata traceability for synthetic media, protections for intermediaries acting in good faith, and enhanced verification and disclosure duties for significant social media intermediaries. These measures, if implemented with strong procedural safeguards against over-removal and commercialisation, offer a preferable, harm-oriented template that protects dignity without embedding identity in assignable, long-term proprietary rights.