Adding to the discussion on proposed deepfake regulations in India (see here for a post on this by Akshat), Denmark (see here for a post on this by Arnav), and the Netherlands, Shama Mahajan analyzes the approaches adopted by these countries and examines the challenges of selecting appropriate legal frameworks to govern deepfakes. Shama is an LL.M. Candidate at the National University of Singapore, pursuing her master's in Intellectual Property and Technology Law.
Interested readers can tune in to the first episode of Let’s IPsa Loquitor to learn more about the regulation of deepfakes and how copyright and other laws interact with it.

Deepfake Regulation: Same Problem, Different Approaches, Yet None Is an Error-Free Resolution!
By Shama Mahajan
The Indian Ministry of Electronics and Information Technology has proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, in a bid to regulate the ‘deepfake’ problem. The proposed changes have been evaluated and criticised for their broad scope and functional challenges. India is not the only country, however, that is attempting to regulate deepfakes. The Netherlands and Denmark have also proposed deepfake regulations, and the route these two countries have taken is to amend their copyright legislation. This post analyzes the two approaches and examines the problem of choosing these legal frameworks as the drivers of deepfake regulation.
Information Technology Act, 2000 (IT Act): How Much Can the Drafters Predict?
Tracing the origins of the IT Act, the objective behind its enactment was ‘to give a fillip to the growth and usage of computers, internet and software in the country as well as to provide a legal framework for the promotion of e-commerce and e-transactions in the country’. As technology and the purposes for which it was employed changed, the IT Act was amended from time to time to accommodate these changes. It even became the means of recognising privacy and data protection, to an extent, until a specific legal framework in that regard was enacted in 2023. However, the recent IT Amendments push us to ask to what extent a legislation can be expanded to cover subject matter that its initial structure and its drafters could not possibly have envisaged. Let us not forget that legislation can be forward-looking only to an extent. Artificial Intelligence (AI) is indeed an advanced form of information technology, but could the Act have anticipated the roles different players within this ecosystem would play with the advent of AI? I argue that the IT Act is not a suitable framework for regulating deepfakes simply because its original structure, and the roles it assumes different stakeholders would play, no longer match reality.
Computer System, AI System, and Intermediary: Processing the Process and Roles
The proposed 2025 IT Amendments for the regulation of synthetically generated information, to an extent, follow the EU AI Act's phraseology in defining what constitutes ‘synthetically generated information’ and in introducing the labelling framework. However, what they disregard entirely is the difference between a ‘computer resource’ as defined under the IT Act and an ‘AI system’ as defined under the EU AI Act. The amendment seeks to categorise all AI applications as ‘some sort of software/tool/application’ and thereby bring them within the sweep of a computer system. The biggest issue with this sweep is the failure to recognise that ‘AI is a system that is designed to operate at various levels of autonomy’, which a computer system is not capable of doing.
Under the IT Act, a computer system is envisaged to perform logical, arithmetic, data storage and retrieval, communication control, and other related functions. On this definition, the output is predetermined and certain because the programming is predefined. An image editing tool, for example, is certain to allow a user to input an image, make changes to it, and receive an edited output. The processing is pre-programmed and hence the tool will not perform outside this program even if instructed to; image editing software will not create a new image. This is referred to as traditional computing.
In contrast, the outputs of AI systems are neither predetermined nor pre-programmed to function in a specific way. Machine learning involves computation that allows the system to adapt, learn, and make decisions based on that learning, i.e., cognitive tasks rather than merely logical and arithmetic ones. The output depends on the nature of the command, the nature of the model, and the training it underwent; an identical command to ChatGPT can give two different results. AI systems also continue to improve post-deployment on the basis of user inputs, something that the user does not regulate or expressly intend, and that the deployer does not control.
Applying the ejusdem generis principle, it is hard to read cognitive tasks into the ‘computer system’ definition. In the absence of a revised definition of ‘computer system’, the amendment (in addition to its other drawbacks) amounts to a stopgap measure whose effectiveness is difficult to fathom. This legislative shortcut fails to account for the technological realities of AI.
Even assuming that ‘computer system’ is read expansively to encapsulate AI and AI tools/software, the challenge remains, as the onus of compliance is made subject to AI developers qualifying as an ‘intermediary’. This issue is compounded once the AI supply chain is understood. The AI developer/provider, AI distributor, and AI deployer are the basic entities involved in the AI supply chain, and the role of an AI developer/provider is not that of a mere platform or conduit. In most cases, AI developers do not perform any of the functions that the ‘intermediary’ under the Act is envisaged to perform. This further strengthens the argument that the indefinite expansion of one Act to resolve all technology-related problems is practically incoherent. The drafters, at the time of drafting the definition of ‘intermediary’, could not possibly have accounted for the role of AI system providers, and an amendment to the Act's subordinate legislation is surely not the way to resolve the impasse.
Denmark & Netherlands: Copyright’s Attempt to Regulate Deepfakes
In 2025, Denmark became one of the first EU countries to tackle deepfakes by expressly amending its copyright legislation, proposing to grant individuals a copyright in their image and likeness; the Netherlands has since followed. The amendments grant natural persons an exclusive right, in the context of neighbouring/related rights, to control the making available of deepfakes of their persona.
These amendments have the effect of explicitly including personality/image rights within copyright law to regulate deepfakes, giving every individual an exclusive right to create a deepfake of themselves. In other words, every person holds a copyright in their digital likeness/identity. This approach suffers from various legal pitfalls. Firstly, the case for regulating deepfakes stems from their harmful use and its consequences rather than from their mere existence. Granting copyright in the creation of deepfakes assumes that individuals, in exercising this exclusive right, will not make potentially harmful deepfakes of themselves. Illustratively, if I make a deepfake of myself inciting communal hatred or spreading fake information, I am well within my copyright to do so; recourse would then lie in the other laws, i.e., civil and criminal, that govern such conduct.
Secondly, the rationale of copyright legislation is different in that it aims to protect the ‘creative expression of an individual’. A digital likeness of an individual is not a creative expression of the individual but simply an extension of the individual themselves, which is an aspect of privacy rather than copyright.
Thirdly, intellectual property aims at creating proprietary rights in creations stemming from human intellect. By granting copyright to an individual in their personality, the law seeks to create a proprietary interest in the self, which is legally untenable under any of the relevant justifications of IP. It would mean that a photographer must pay a licence fee, not merely seek consent, to photograph or record a person. This is akin to saying that when I travel on a public bus, those who see me are violating my privacy. If that sounds absurd, then so should having copyright in one's own personality. This approach is also distinct from ‘publicity rights’, which aim to regulate the unauthorized commercial exploitation of one's personality.
Conclusion
Simply stated, the problem is an attempt to avoid law-making and to address new situations by amendment instead. The approach of reading something into existing law suits the judiciary because, by virtue of the separation of powers, courts are required to avoid legislative functions. When the legislature also resorts to this approach, however, it is unclear what the reasons are. Perhaps an attempt at legislative creativity? But it would not do to forget that a law, when framed, is attuned to the temporal nuances and societal conditions of its time and to only a limited future prospect. It is not made as a forecast of the future, and its ability to accommodate the future is accordingly limited. Legislatively, it is important to recognise these limitations. If India, the Netherlands, and Denmark intend to regulate deepfakes, they will have to initiate the legislative process by first appreciating where the existing law fails; it is these gaps that new legislation will have to fill. The proposed amendments have their own problems, but the overarching problem, in my view, is the very idea that an amendment can be a quick fix substituting for legislative deliberation and reassessment.