On MeitY's recently proposed amendments to the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, Akshat Agrawal explains how, despite a sound regulatory intent, the proposed amendments suffer from a few conceptual problems that may render them either unenforceable or unworkable. Akshat is a litigator and the Founder and Counsel at AASA Chambers, where he provides Counsel and Of Counsel services. He is also a PhD candidate at the University of Cambridge. His previous pieces can be found here. He adds the following disclaimer: After some discussion around an earlier, longer draft, I would also like to acknowledge the usage of Claude.ai for helping me re-frame the draft more succinctly and in a reader-friendly manner. Views expressed here are personal.

Unpacking MeitY’s Proposed IT Rules Amendments: Between Regulation and Practicality
By Akshat Agrawal
The Ministry of Electronics and Information Technology has proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, introducing obligations for “synthetically generated information.” The policy motivation is understandable. Deepfakes pose genuine risks to public trust in democratic discourse. Requiring transparency can help address deception concerns, which have more recently been raised in right to publicity claims (albeit in a commercial context). Whether labelling/watermarking is the best way to achieve this is debatable. Regardless, the proposed amendments suffer from conceptual problems that may render them either unenforceable or unworkable.
Amendment No. 4
Amendment No. 4 introduces a new Rule 3(3), which requires “intermediaries” whose computer resources enable the creation of synthetically generated information to label all outputs with permanent, unremovable metadata. For visual content, such labels must cover at least ten percent of the surface area. For audio content, labels must occupy the initial ten percent of the duration. The provision applies to any “intermediary”.
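To make these thresholds concrete, here is a minimal, purely illustrative sketch of the arithmetic a literal reading of the proposed Rule 3(3) would imply. The function names, the units, and the assumption that “surface area” means pixel area are mine, not the draft’s.

# Illustrative arithmetic only; names and units are assumptions, not the draft's.

def minimum_label_area_px(width_px: int, height_px: int) -> int:
    """Minimum visible label area for visual content: 10% of the surface area."""
    return (width_px * height_px) // 10

def minimum_label_duration_s(total_seconds: float) -> float:
    """Minimum leading label duration for audio content: the first 10% of runtime."""
    return total_seconds * 0.10

# A 1920x1080 frame would need a label covering about 207,360 pixels;
# a 60-second audio clip would need a label over its first 6 seconds.
print(minimum_label_area_px(1920, 1080))   # 207360
print(minimum_label_duration_s(60.0))      # 6.0

Even this trivial computation requires choosing a measurement convention that the draft itself does not specify.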
This amendment only practically functions if generative AI developers qualify as intermediaries under Section 2(1)(w) of the Information Technology Act, 2000. That provision defines an intermediary, with respect to any particular electronic record, as any person who “on behalf of another person receives, stores or transmits that record”. The statutory language reveals a conduit model, in which the intermediary facilitates transmission of pre-existing content without substantive alteration or creation.
When a user prompts a generative AI model, the system does not retrieve and transmit a pre-existing record. It generates output by synthesizing patterns learned from training data through algorithmic architectures. The training data that was curated and fed into the system is not “that record” which emerges as the response. Input and output are distinct electronic records, and the output (as the copyright debates reveal) is often significantly transformed from any single individual input. In Google LLC v. DRS Logistics, the Delhi HC rejected Google’s claim to intermediary status for its AdWords program even though advertisement selection and placement were entirely automated through algorithmic processes, holding that Google being “the architect of its programme and operat[ing] the proprietary software” meant that algorithmic automation did not render it a mere conduit. The Court found that Google’s use of quality scores, click-through rate predictions, and bid optimizations to determine which advertisements appeared constituted sufficient editorial control to deny safe harbor protection under Section 79 of the IT Act. If algorithmic curation of existing advertisements defeats intermediary status, the reasoning would seem to apply with even greater force to systems that generate entirely new, significantly transformed content through algorithmic processes.
However, a counterargument exists: generative AI tools could qualify as intermediaries, or at the very least the developer lacks a legal duty analogous to that imposed on a publisher and deserves safe harbour similar to that extended to conduits, precisely because of the self-evolving nature of these systems. These systems are initially trained using datasets that are influenced and curated by the developers of the model. The developers make choices about what data to include, how to clean and filter it, and what reinforcement learning techniques to apply. But beyond that initial training phase, the model’s ongoing learning depends on user inputs and interactions that occur outside the control of the developer. The model continues to evolve, refine its parameters, and adjust its outputs based on the prompts it receives and the patterns it identifies in user interactions. This creates a situation fundamentally different from traditional publishing.
If a generative AI tool produces defamatory output about an individual based on patterns it learned from user interactions after deployment, tracing liability back to the developer and refusing to exempt them creates a situation where developers bear responsibility for something the tool’s self-evolving nature produced rather than something they directed or controlled. This is not analogous to a traditional publisher relationship. There is no editorial control in the conventional sense. The developer is merely enabling the model to learn how to learn – providing the architecture and initial parameters within which learning occurs. Thereafter, what specific content the model gets exposed to, what patterns it identifies in user behavior, and what outputs it generates in response to particular prompts are often beyond the control of the developer.
This creates ambiguity. One can construct reasonable arguments on both sides of the intermediary classification or the safe harbour question. What cannot be reasonably disputed is that Amendment No. 4 imposes obligations on entities whose legal classification under Section 2(1)(w) remains legislatively unresolved. The important point is that this cannot be done through the Rules (i.e., subordinate legislation) when “intermediary” is defined in the Act itself.
Moreover, the amendment cannot coherently be interpreted to mean that an entity which qualifies as an intermediary for one transmission stream (for example, the Amazon marketplace) automatically bears obligations under Amendment No. 4 for another stream (for example, Amazon Rufus) where it is unclear whether it qualifies as an intermediary at all. That is not an answer to this ambiguity.
Amendment No. 2
Amendment No. 2 introduces a new definition of “synthetically generated information” as “information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true.” This definition is extraordinarily expansive. On a plain reading, any photograph edited using Instagram’s algorithmic filters qualifies as synthetically generated information requiring permanent watermarks. Medical images enhanced through AI-assisted diagnostic tools that help radiologists identify potential pathologies would also qualify.
The Rules make no meaningful distinction between applications where algorithmic generation or modification creates genuine risks of deception or harm and benign uses where such risks are minimal or nonexistent. Generative AI tools used in filmmaking and visual effects production to enhance cinematography face the same labeling requirements as tools specifically designed to generate non-consensual intimate imagery or fake videos of public figures making statements they never made. The blanket approach ignores context, purpose, and actual potential for harm.
Amendment No. 3 – The Safe Harbor Contradiction
Amendment No. 3 introduces a proviso to Rule 3(1)(b) providing that removal or disabling of synthetically generated information undertaken as part of reasonable efforts or in response to grievances “shall not amount to a violation” of the conditions for safe harbor eligibility under Section 79(2) of the IT Act. The stated intention is to encourage platforms to moderate AI-generated content without risking the loss of liability protections that safe harbor provides.
This provision creates an apparent logical contradiction with the statutory framework it purports to implement. Safe harbor protection under Section 79 exists precisely because intermediaries purportedly lack the capacity or obligation to pre-screen or editorially assess the vast volumes of third-party content flowing through their platforms. That is the foundational rationale for granting immunity from liability. Section 79(2)(b) expressly conditions safe harbor eligibility on the intermediary not initiating transmission, not selecting receivers, and not selecting or modifying information contained in transmissions.
Amendment No. 3, however, encourages platforms to do the opposite. By providing that content removal decisions will not vitiate safe harbor status, the amendment encourages platforms to make determinations about whether AI-generated content falls within specified prohibited categories. If platforms possess the capacity to identify and make such determinations, they demonstrate editorial judgment capability. If they can identify synthetically generated content that violates specified categories and remove it, they possess awareness of its presence on their platforms. Yet Section 79(3)(b) expressly provides that safe harbor protection does not apply where an intermediary fails to expeditiously remove or disable access to material after receiving actual knowledge that such material is being used to commit an unlawful act.
This raises a serious question about the scope of subordinate legislation. Rules made under statutory authority cannot override or contradict the parent statute. Can Amendment No. 3 provide that platforms exercising editorial judgment will not lose safe harbor protection when Section 79(3)(b) in the statute itself contemplates that actual knowledge removes such protection? The proviso attempts to preserve safe harbor while encouraging conduct that may generate the very awareness that Section 79(3)(b) identifies as lifting immunity. This creates genuine uncertainty about whether the Rules operate within the authority granted by the enabling statute.
Amendment No. 5
Amendment No. 5 introduces a new Rule 4(1A), targeting significant social media intermediaries (SSMIs). It requires SSMIs to require users to declare whether content is synthetically generated, to deploy reasonable and appropriate technical measures to verify such declarations, and to label verified AI-generated content appropriately. The proviso states that where a platform “knowingly permitted, promoted, or failed to act upon such synthetically generated information in contravention of these rules,” it shall be deemed to have failed to exercise due diligence.
The knowledge standard embedded in the proviso is conceptually sound, but the amendment imposes overbroad verification responsibilities without providing a clear understanding of what constitutes adequate discharge of those responsibilities. SSMIs are told to obtain user declarations, deploy technical verification measures, and label confirmed AI content, but the amendment provides no guidance on what “reasonable and appropriate technical measures” means in practice. When is verification sufficient? What level of accuracy must detection tools achieve? How should platforms handle cases where technical verification contradicts user declarations, but the verification tools themselves have known error rates?
This creates uncertain liability exposure. SSMIs face significant penalties for failing to exercise due diligence, but the amendment does not define the threshold for reasonable conduct with sufficient precision to guide compliance decisions.
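To illustrate how little the amendment specifies, the following is a deliberately simplified sketch, with wholly assumed names, thresholds, and error rates, of the decision an SSMI’s compliance logic would have to make when a user declaration and an automated detector disagree. Nothing in it comes from the draft rules themselves.

# Purely illustrative; every name, threshold, and error rate here is an assumption.
from dataclasses import dataclass

@dataclass
class Signals:
    user_declared_synthetic: bool      # the user declaration Rule 4(1A) would require
    detector_score: float              # a hypothetical detector's confidence that the content is AI-generated
    detector_error_rate: float = 0.08  # assumed; the rules say nothing about acceptable error

def labelling_decision(s: Signals, threshold: float = 0.8) -> str:
    """Decide whether to label content as synthetically generated."""
    detected = s.detector_score >= threshold
    if s.user_declared_synthetic and detected:
        return "label as synthetically generated"
    if not s.user_declared_synthetic and not detected:
        return "no label"
    # Declaration and detection disagree. The amendment does not say whose
    # judgment controls, what threshold is "reasonable and appropriate",
    # or how much detector error a platform may tolerate before it is
    # deemed to have "failed to act".
    return "unresolved under the draft rules"

# A user denies using AI, but the detector is fairly confident otherwise:
print(labelling_decision(Signals(user_declared_synthetic=False, detector_score=0.92)))

The point is not the code but the missing parameters: the threshold, the tolerable error rate, and the tie-breaking rule would all have to be invented by the platform, which is precisely the gap in guidance described above.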
Duty (and consequently liability for breach) for AI-generated content should turn on whether the developer took all technologically available, practically feasible, and objectively plausible steps to prevent harmful outputs. This bright-line test draws from principles that Indian law already recognizes. Product liability jurisprudence employs risk-utility balancing, which weighs whether the burden of eliminating harm is outweighed by the gravity and probability of the harm and the utility of alternative designs. The provision of a service through a product that relies on algorithms ought to be distinguished from negligent facilitation. A context-sensitive application based on actual risk profiles, rather than one that treats all synthetically generated information identically, ought to be devised. Without such a framework, Gen AI tool developers and SSMIs face impossible expectations: they are asked to verify and moderate without clear guidance, even though providing that guidance is precisely the point of delegated legislation.