Adding to the steady stream of critiques on the DPIIT Working Paper on the interface of Copyright and AI, my post focuses on the proposed statutory recommendations, specifically, on how the proposal seeks to create new legal rights with no analysis of the corresponding duties and liabilities that will be triggered.

In February 2024, the Ministry of Commerce & Industry was sure that the existing IPR regime was well-equipped to protect AI-generated works. Generative AI (GenAI) developers would need permission from copyright holders for the commercial use of works falling beyond Section 52’s fair dealing exceptions. Individual rights-holders would exercise their private property rights in cases of digital circumvention or other infringement. On December 8, 2025, the DPIIT released the first of its two-part Working Paper (WP) titled ‘One Nation, One License, One Payment: Balancing AI Innovation and Copyright’. Now the DPIIT finds our copyright law insufficient and sets a 3-year timeline for a statutory overhaul to accommodate GenAI.
In this post, my central (and doctrinal) question concerns the WP’s treatment of copyright law and policy: what would a GenAI and copyright policy look like if it did not fixate on large-scale statutory amendments? I discuss five aspects: the failure of similar model propositions; the terrible short-sightedness of the proposed legal changes; and how the proposal curbs progressive copyright practices, may promote unnecessary litigation, and is economically counterproductive.
As a caveat, my comment addresses the possible legal implications of imposing the WP’s proposed amendments; the larger question of why the DPIIT committee is going so far out of its way to incentivise an intermediary, and not an author, in the works production process merits a separate discussion. A concise reading list at the end of this post references other critiques of the WP.
First, a quick appraisal of the proposed statutory changes. The WP recommends a ‘hybrid model’, i.e., AI developers get a mandatory blanket license to use all lawfully accessed copyright-protected works as training data, with no opt-out for rights-holders. Compensation would be set by a centralised CRCAT, distributed through registered Copyright Societies and CMOs, calculated from profit revenues, and paid annually, to registered works only. The WP repeatedly emphasises the elevation of AI developers’ access to copyrighted content to a “matter of right”, sustained by mandatory licensing that restricts Section 14’s reproduction rights (again, a disputable idea, but this is what the WP implicitly concludes), in exchange for a trade-off “right of fair compensation”. It also proposes amendments to the Act and Rules to establish the CRCAT, a registration-based ‘works database’, annual payment of royalties, triennial revision of rates, and judicial review.
- More liberal versions of the model have failed.
The WP may be novel in advancing the hybrid model as a formal policy proposal; it is, however, not the first time that such a model has been conceived of and studied. For instance, it bears close resemblance to a failed draft Spanish Royal Decree. While more liberal than the WP, the decree (based on an extended collective license, or ECL, framework with an opt-out option) was met with severe backlash. The decree – suffering from the same extreme pro-tech bias as the WP – was drafted on the assumption that all rights-holders would be willing to authorise the use of their works in AI training datasets in service of Spain’s national AI policy goals. Like the Spanish draft, the WP repeatedly generalises (and reduces) the incentive behind human authorship to a want of remuneration, and assumes that order will be restored as long as original human creators are promised some measure of economic benefit. An ad-hoc survey in Spain showed that compensation was not a motivator in the cession of rights to CMO-centric blanket licenses; rather, authors would prefer not to have their works licensed to AI developers at all.
- The WP is not just prioritising access; it is policing behaviour.
The DPIIT committee takes an unusually deterministic stand in the incentive-innovation debate by removing the opt-out mechanism. To ensure this, an expanded copyright will give AI developers the power to access data “as a matter of right”. The WP is silent on how this right might be enforced. So far, a paywall is the only option for resisting automatic scraping of data. But what happens when a rights-holder who cannot use a paywall instead resorts to coding protocols to deter scrapers – do AI developers get to use their ‘matter of right’ power to force compliance? Is automatic subscription to the hybrid model supposed to deem the otherwise accepted practice of using ‘robots exclusion protocols’ illegal?
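For context, a robots exclusion protocol is nothing more exotic than a plain-text robots.txt file served from a site’s root. A minimal sketch (the crawler names shown are user-agents publicly used by some AI scrapers, listed here purely as illustration) might read:

```text
# robots.txt — requests, but cannot technically force, crawlers to stay out
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Compliance with such a file is entirely voluntary on the crawler’s part, which is precisely why the WP’s silence on enforcement matters.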
The WP also brings in the question of intent by drawing an analogy with SEP licensing, where both participants are judged on their willingness to honour the FRAND bargain. If an enquiry into intent is introduced, then the only other real ‘opt-out’ solution, driven by the threat of litigation, would be for unwilling rights-holders to go offline. Surely these are not the right-and-remedy consequences that our copyright law intends.

Another troubling overlap is that between public copyright licences, like Creative Commons, and the hybrid model. Authors who have pledged their works under variants of CC (such as CC BY) or other open source licences have voluntarily chosen to enable free distribution, including for commercial use. By creating a statutorily enforced trade-off ‘legal right to fair remuneration’, the WP’s blanket licensing scheme conflicts with the ideals these authors voluntarily chose in taking on public copyright licensing obligations.
- Courts as rate-setting venues create issues of forum-shopping and a race to the bottom.
The WP leaves open the scope for judicial review of the rate-setting committee’s decisions. In practice, this will likely play out as litigation over the CRCAT’s paternalistic valuation versus what rights-holders could have earned as royalties on the free market. Additionally, the Committee’s suggestion of an “AI Training Royalty Distribution Policy” by the CRCAT, which might consider ‘the general standing or market value of the work’ based on ‘… market share, viewership data, etc.’ to create subjective value-based high, medium and low categories, is problematic in itself. Divorced from the reality of how GenAI functions, it creates space for well-organised media sectors to exploit existing power dynamics and negotiate higher rates. Xiyin Tang explains how this may eventually create a bilateral oligopoly.
- (Potential) Undue rights extension in favour of rights-holders.
The annual royalty payment seems to lock AI developers into a cycle of perpetual payments originating from a singular instance of training the AI model. The Spanish decree discussed above had a three-year cap on payments; the WP mentions none. Moreover, the statutory connection that the WP draws between copyright and training data may not hold, especially if the consensus emerging from pending litigation, or further advancements in AI research, disproves the direct correlation that copyright law requires. Neel Nanda’s research, for instance, suggests that ‘grokking’ includes a ‘clean up’ phase in which the AI model discards the original training input and functions on generalised patterns.
- Model previously discussed in academic research.
Finally, the WP’s claim that the hybrid model promotes economic efficiency and the public interest, thus necessitating statutory amendment, is contestable. Kaplow and Shavell noted that ‘a fundamental legal problem is whether property rights should be protected by property rules or by liability rules.’ The WP stresses, and at least in this aspect it is correct, that the transaction costs of voluntary licensing (the private exercise of property rules) would be prohibitively high. The draft then assumes the fact of infringement and clearly chooses state intervention in setting objectively determined damages, here in the form of royalties (liability rules). The problem is that the proposal fails at the next step: as Calabresi and Melamed explain, efficiency is not the sole ground, and the imposition of liability rules should follow distributive goals.
On a game-theoretical analysis, the hybrid model is a ‘negative-sum game’: one side always has the advantage, while the other may sometimes win but on average loses. AI developers, with their right to access, will scrape all the data they can; rights-holders may get compensated if they have access to CMOs; the unorganised cultural sector, which lacks this access, will miss out on its legally-ensured compensation.
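The asymmetry can be sketched as a toy payoff model. All numbers below, including the assumed 30/70 split between CMO-registered and unorganised creators, are hypothetical, chosen only to mirror the structure described above: developers gain from every training use, while creators’ average outcome depends entirely on CMO access.

```python
# Toy illustration of the asymmetric payoffs under the hybrid model.
# All figures are hypothetical; they only mirror the structure in the text:
# (developer_gain, creator_gain) per training use of one work.
PAYOFFS = {
    "cmo_member":  (3.0, 1.0),   # some royalty reaches CMO-registered creators
    "unorganised": (3.0, -2.0),  # no CMO access: loss with no compensation
}

# Assumed composition of the creator population (hypothetical split).
SHARES = {"cmo_member": 0.3, "unorganised": 0.7}

def expected(side: str) -> float:
    """Share-weighted average payoff for 'developer' or 'creator'."""
    i = 0 if side == "developer" else 1
    return sum(SHARES[group] * PAYOFFS[group][i] for group in PAYOFFS)
```

Under these illustrative numbers, `expected("developer")` is always positive, while `expected("creator")` comes out negative: one side always wins, the other may sometimes win but on average loses.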
From a more rigorous economic viewpoint, Joshua S. Gans explores what the probabilistically best model for promoting incentive-innovation efficiency might be. Gans, too, finds it socially desirable to allow AI models to train on all available content, without an opt-out for rights-holders. However, he finds that a ‘more favourable balance of incentives’ will be achieved if copyright-holders’ right to claim infringement with compensation is maintained. Insurance against the ‘threat to original content providers’ commercial activities’ must be addressed for an optimal policy solution.
- Other prominent issues
Not only does the hybrid model fail to alleviate rights-holders’ fears of regression; its predicted statutory implications create an environment of increased legal uncertainty. The regulation of copyright law in digital spaces has often been a question of access to content. The WP answers this question by prioritising intermediaries in the creative process over original rights-holders, and engages not at all with incentivising potential end-users of AI systems.
Meghna Bal voices apprehension over the centralisation of power with CMOs, given their history of extortion and limited representation. Prashant Reddy notes the constitutional invalidity of bureaucratic intervention, power dynamics in CMO representation, and a general lack of administrative capacity. Bharath Reddy and Mihir Mahajan discuss issues with the practical implementation of the proposed licensing regime, pointing out severe loopholes in identification, attribution, and fairness in rate-setting. Other reporting (here, here and here) echoes concerns over the WP’s pro-AI-developer bias, lack of Indian representation, removal of the opt-out choice, and unfeasible cross-border implementation. Swaraj Barooah and Akshat Agrawal question the very premise of the DPIIT’s problematisation of the AI-Copyright interface, persuasively discussing its lack of jurisprudential grounding and its potential to tragically suspend conventional understandings of copyright law and policy.