Responding to Roberto Dini’s recent piece suggesting that all AI inventions are ab initio obvious because they are reproducible for an AI-enabled Person Of Ordinary Skill In The Art (POSITA), Krishna Jani and Tania Aithani explain how this understanding may not work owing to the heterogeneity and data-dependence of modern models. Krishna and Tania are fourth-year law students at the Institute of Law, Nirma University. They keenly follow and write on the evolving challenges at the nexus of technology and intellectual property.

Obvious to Whom? Analysing Dini’s AI Dilemma
By Krishna Jani and Tania Aithani
The intersection of artificial intelligence and intellectual property law continues to generate complex legal challenges that demand careful scholarly attention. Roberto Dini’s recent editorial in the Journal of Intellectual Property, Law and Practice exemplifies this complexity. Rather than simply reiterating that AI lacks legal standing as a ‘person’ under current IP frameworks, Dini presents a more sophisticated challenge. He argues that even if the personhood barrier were eliminated, AI-generated inventions would fail to satisfy the non-obviousness requirement fundamental to patent law. He assumes that the advent of autonomous AI inventions would also necessitate replacing the established ‘person skilled in the art’ standard with an AI-equivalent benchmark for assessing novelty and obviousness. This substitution, Dini contends, would render all AI innovation obvious and thus unpatentable.
In this article, we analyse the claims made by Dini and the principal assumptions on which they are founded. After laying out the arguments made in the editorial, we assess the strength of these claims and the possible resolution of the questions raised in the discourse.
Is Everything Reproducible?
Dini’s October 2025 editorial argues that while an AI’s solution to a technical problem may seem non-obvious at first, it ultimately is not. The argument rests on two grounds. Firstly, an invention wholly developed by an AI must be judged for non-obviousness by an equally capable AI. This is necessary, Dini explains, because a human may find all of the model’s inventions non-obvious owing to the ‘superior capabilities’ of an AI inventor. Secondly, when an AI-based POSITA addresses the technical problem on which the AI inventor appears to have ‘innovated’, the AI POSITA ‘will find the same solution’ as the inventor due to the ‘reproducibility’ of outcomes in computational systems.
Reproducibility, Dini contends, applies squarely to AI models, such that given the same technical problem and the same ‘prior art’, two models will produce the exact same invention, for AI merely ‘reshuffles’ prior art and lacks a genuine ‘flash of genius’ (sic). Dini founds his arguments on an underlying ‘homogeneity’ between all AI models. This homogeneity is both internal and external, that is, in their algorithmic logic and in the manner in which they are trained. However, this assumption is imprecise.
Firstly, dozens of AI/ML approaches (random forests, neural networks, gradient boosting, etc.) are applied differently across diseases, datasets, and populations, yielding varied results. This diversity undermines the assumption of uniformity. AlphaGo cannot design novel proteins and AlphaProteo cannot play Go. The internal makings of each model are heterogeneous, non-substitutable and context-dependent. For an AI POSITA to reproduce an output, it would need the exact same internal structure as the inventor model.
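This heterogeneity can be illustrated with a toy sketch: two standard learning approaches, trained on identical data, need not produce the same answer to the same query. The data and models below are invented purely for illustration and do not represent any real system discussed above.

```python
# Toy illustration: two model families trained on the SAME data can give
# DIFFERENT answers to the same "technical problem".
# (Data and models are invented for illustration only.)

train = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (x, y) pairs

def nearest_neighbour(x_query):
    """Predict the y of the single closest training point."""
    return min(train, key=lambda p: abs(p[0] - x_query))[1]

def least_squares(x_query):
    """Predict with an ordinary least-squares line y = a*x + b."""
    n = len(train)
    sx = sum(x for x, _ in train)
    sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train)
    sxy = sum(x * y for x, y in train)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a * x_query + b

print(nearest_neighbour(2.4))  # 3.9  (copies the neighbour at x = 2.0)
print(least_squares(2.4))      # ~4.81 (interpolates along the fitted line)
```

Both ‘models’ saw exactly the same prior data, yet they disagree on the same query, because their internal logic differs. An AI POSITA built on one family cannot be assumed to reproduce the output of another.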
Secondly, the critical determinant in the output of any computational system, including advanced AI models, is the dataset that it is permitted to work with. For two AI models to independently generate an identical solution to a technical problem, they would require exposure to the exact same dataset during their training and operational phases. This step is where the bulk of the R&D expenditure goes while building new models. For instance, AlphaFold—a model that predicts protein folding—was built using a supervised model trained specifically on the Protein Data Bank. The process through which the model was trained to perform a singular task with near-perfect accuracy differentiates AlphaFold from other deep learning models. An AI POSITA would have to replicate the method and data involved in the training that enabled the innovation in order to reproduce the same output as these ultra-specialised models. It is absurd to propose that AlphaFold, as the inventor, should also be the standard for determining the non-obviousness of its own creation. From the inventor’s perspective, the invention would certainly appear obvious. The rule that an inventor cannot be the standard for determining non-obviousness applies to all inventions, AI or not. Allowing an inventor’s perspective to be the benchmark would invalidate the ‘inventive step’ requirement universally.
Thirdly, an unsupervised black-box model may create its own novel patterns and connections in the training data, which can differ substantially from one unsupervised ML algorithm to another. Two such models with the same internal design and the same external training data may still arrive at substantially different outcomes to a given problem. The reproducibility of outputs under these conditions is a matter of mere probability and is therefore uncertain. Reproducing an unsupervised model’s innovation is, in effect, a game of chance.
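This seed-dependence is easy to demonstrate with a minimal sketch of k-means, a standard unsupervised clustering algorithm. The data and starting points below are invented for illustration: the same algorithm, run on the same data, converges to different clusterings depending only on its (normally random) initialisation.

```python
# Toy sketch: the SAME k-means algorithm on the SAME data converges to
# DIFFERENT clusterings depending only on its initialisation, which in
# practice is chosen at random. (Data and starting centres are invented
# for illustration only; empty clusters are not handled.)

def kmeans_1d(data, centers, max_iter=100):
    """Plain 1-D k-means; returns the final partition as a tuple of tuples."""
    clusters = [[] for _ in centers]
    for _ in range(max_iter):
        # Assign each point to its nearest centre.
        clusters = [[] for _ in centers]
        for x in data:
            nearest = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            clusters[nearest].append(x)
        # Move each centre to the mean of its cluster; stop when stable.
        new_centers = [sum(c) / len(c) for c in clusters]
        if new_centers == centers:
            break
        centers = new_centers
    return tuple(tuple(c) for c in clusters)

data = [0, 4, 5, 6, 10]
run_a = kmeans_1d(data, centers=[0, 10])  # one "random seed"
run_b = kmeans_1d(data, centers=[0, 4])   # another "random seed"
print(run_a)  # ((0, 4, 5), (6, 10))
print(run_b)  # ((0,), (4, 5, 6, 10))
```

Identical code, identical data, two different ‘discovered patterns’: whether an AI POSITA lands in the same local optimum as the inventor model is, as argued above, a matter of chance.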
We believe this introduces a critical distinction that dismantles Dini’s argument when the patent disclosure process is considered. Having ‘access to the same data’, contrary to what Dini contends, will not lead two AI models to the same conclusion. While the specific input data used by a model to generate the final patentable solution would likely be disclosed as part of the patent application, the vast and proprietary foundational training dataset that shaped the model’s problem-solving skills would not. This foundational data, which is the primary source of its inferential power, may be protected as a valuable trade secret. Therefore, the notion of an equivalent AI ‘POSITA’ arriving at the same conclusion is computationally impossible.
The protection of training data as a ‘trade secret’ offers selective and qualified protection to those datasets and algorithms that were instrumental in enabling the model to innovate on a given subject matter or technical problem. As the enablement standard rightly prescribes, once a model arrives at its own ‘invention’, any data essential to enable the invention to be ‘worked’ by a person ‘possessing average skill in, and average knowledge of, the art to which the invention relates’ must be clearly disclosed. This, however, does not extend to the disclosure of its foundational training data (some of which may involve proprietary datasets), so long as that data has no nexus with the invention.
Perhaps the analogy of a human inventor helps substantiate this claim. The AI model’s foundational training data is analogous to a human inventor’s tacit knowledge: it shaped the model’s ‘skill’ and not the invention itself, much like the innumerable years Tesla or Edison spent researching, reading and experimenting. Therefore, when considering the patentability of the model itself, the enablement standard would require disclosure of the training process, and the respective training data, so that the ‘invention’ can be realised without undue burden.
Au contraire, to disclose the manner of ‘working’ an invention developed by the AI inventor, the patent application would only need to disclose the specific datasets relevant to the invention, including prompts, parameters, weights (where necessary), and the specific input data used to generate the inventive output. This is analogous to the experimental conditions, lab protocols, reagents, or tools used by a human inventor. None of this requires disclosure of the vast foundational training corpus itself, which merely ‘enabled’ the invention at issue.
Admittedly, not all models can accurately describe or ‘explain’ how the invention came about. This ‘explainability’ is linked to the transparency of a model’s inferences, of which black-box models are the most opaque. But this does not limit the patentability of AI inventions, as the enablement standard does not require a specification to ‘educate’ the reader on how an invention works. Moreover, it is a bedrock principle of patent law that an inventor ‘need not know or understand how or why an invention works’; all that is required is an explanation of how to make and use the invention. The standard is thus limited in its scope.
As Robinson argues, it is impossible to say for certain whether the enablement standard ipso facto and ipso jure forecloses patentability for AI inventions. He justifies his ‘wait and watch’ approach by noting the constant evolution of the enablement standard. For instance, the US Supreme Court in Amgen Inc v. Sanofi recently endorsed a narrower reading of the standard, holding that a specification need not describe how to make or use every single embodiment within a claimed class, nor is it inadequate merely because it leaves the skilled artist to engage in ‘some measure of adaptation or testing’. Admittedly, these developments were not made specifically in the context of AI or AI-enabled inventions, but they indicate that the standard is bound to undergo further revision as AI-enabled inventions grow in volume. In such a paradigm, a black-box model is given considerable latitude to fulfil the enablement standard, and its limited capacity to explain the invention, or the possible method of working it, is therefore not fatal to the patentability of its invention.
Restructuring POSITA
Dini’s core concern undeniably has merit. But while he raises a pertinent issue, his concerns rest on a rather simplistic view of AI. The POSITA standard requires restructuring to adapt to the (fast approaching!) era of AI inventors, and Dini’s approach is incomplete: it would be counterproductive to exclude AI-generated outputs from the fold of patentable inventions altogether.
Ryan Abbott proposed a more sophisticated solution in his seminal paper ‘Everything is Obvious’. Abbott anticipates Dini’s claim that as inventive machines improve, the bar for patentability must evolve proportionally. However, he proposes an ‘inventive machine standard’ (IMS) as an alternative to POSITA, under which reproducibility is not an inherent disqualifier but a mechanism for bringing much-needed objectivity to the non-obviousness standard. The proposed test is to determine whether a ‘standard machine’ could reproduce the subject matter with ‘sufficient ease’.
In our opinion, Abbott’s approach provides a potential bright-line test for the non-obviousness of AI inventions. Firstly, his proposal is centred on standard machines, a realistic and achievable benchmark compared to the perfectly matched ‘AI supercomputer’ that Dini requires. Secondly, the ‘sufficient ease’ standard is well placed to accommodate the vast diversity of AI architectures. We suggest that a general-purpose LLM trained on generic prior art could serve as the ‘standard’ machine. Doing so would not only raise the standard of non-obviousness in line with contemporary technological capabilities, but also ensure that inventions from highly advanced, non-standard models are not disqualified ab initio.
Conclusion
Apart from purely AI-enabled inventions, the discipline is equally grappling with inventions where human and AI seamlessly collaborate. In such situations, a POSITA would presumably find such collaborative innovations non-obvious more often than not. This calls for substituting POSITA with the IMS as a more favourable mechanism for deciding the non-obviousness of an invention. The standard recognises that AI usage may be original through prompt engineering; thus, if an invention is not reproducible with sufficient ease, even such collaborative works may become patentable, assuming that the ‘personhood’ limitation currently imposed is, in due course, done away with. This would, in turn, lead to a more holistic understanding of POSITA itself in the evolving field of intellectual property law.