Bench and Bot – The Kerala HC’s AI Guidelines and the Bigger Judicial Puzzle

On the Kerala High Court publishing its “Policy Regarding Use of Artificial Intelligence Tools in District Judiciary”, Shailraj Jhalnia discusses the judicial use of AI in different jurisdictions and the trend of Courts turning to AI tools. He also discusses the asymmetry in having the guidelines apply only to district courts, and not appellate courts. Shailraj is a third-year law student pursuing a B.A. LL.B. at the National Law School of India University, Bangalore, with a keen interest in IP Law, Arbitration and Criminal Law. His previous posts can be accessed here and here.


By Shailraj Jhalnia

The use of artificial intelligence in the judiciary is no longer a futuristic vision but an existing reality, with courts across the globe employing AI in different roles. Many jurisdictions are gradually inducting AI as an administrative or case-management aid under strict human oversight (discussed below). But a more contentious trend is unfolding, in India and globally, where AI is not always relegated to backstage support: there have been instances of judges turning to generative AI for substantive legal work, including the very essence of judicial work, writing judgments. The well-known instance of the Punjab and Haryana High Court using ChatGPT while deciding a bail plea is evidence of this ad-hoc experimentation, and it has caused heated controversy over accountability and the character of judicial reasoning.

It is against this backdrop of discretionary, high-stakes use that the July 2025 Kerala High Court “Policy Regarding Use of Artificial Intelligence Tools in District Judiciary” should be understood. It is a vital first step towards officially governing a practice that had been developing informally. At its core, the policy draws a firm red line in clause 4.6: AI tools are categorically prohibited from being used “to arrive at any findings, reliefs, order or judgment.” AI is defined as purely “assistive” software, and responsibility for the “content and integrity of the judicial order” remains entirely with the judge. The policy also establishes, in clause 3.4, a fundamental dichotomy between general AI tools and “Approved AI Tools” that have been officially vetted by the High Court or the Supreme Court. It cautions that widely used generative AI tools such as ChatGPT threaten confidentiality, and it prohibits their use for judicial work, with exceptions only for the approved ones.

How Are Other Jurisdictions Approaching Judicial Use of AI?

A global survey of judicial AI finds a common “human-in-command” ethos driving its implementation, although the approach varies by country. In Brazil, the VICTOR system speeds up the filtering of cases by scanning appeals for the “general repercussion” criterion, reducing review times from 44 minutes to 5 seconds. Most importantly, human judges remain the final decision-makers.

Similarly, Colombia and Argentina’s Prometea system produces draft opinions and sorts constitutional petitions. In one pilot, it screened urgent health cases in less than two minutes, an activity that would take a clerk 96 days. Even though it is efficient and 96% accurate on other tasks, its work is clearly ancillary and always subject to review by the court.

The U.S. approach is more conservative, emphasizing lawyer ethics and rules of evidence over dedicated judicial AI tools. Courts caution lawyers to vet AI outputs, and a proposed federal rule would treat AI-generated evidence like expert testimony, requiring demonstrated reliability and transparency.

The EU and UK formalize this ethos into policy. The EU’s AI Act requires “effective human oversight” of high-risk systems, and UK institutions emphasize that AI cannot replace human judgment. The common international trend is that the speed of AI is accepted, but only subject to non-negotiable principles of transparency, auditability, and non-delegation of the final word.

Yet even where the “human-in-the-loop” model works, it carries a long-term risk: the erosion of the critical thinking essential to judging. Research suggests that over-reliance on AI encourages cognitive offloading, weakening precisely the sustained mental engagement that judges cannot afford to lose. The model may be scrupulously applied at first, but over time convenience breeds a subtle dependency, and “review” risks becoming mere ratification rather than a rigorous application of the judicial mind. This gradual atrophy of critical faculties is precisely why entrusting essential judicial functions to AI is risky. The human may remain in the loop, but the detrimental effects of over-reliance on AI must be acknowledged, as the underlying intent of the EU’s AI Act, noted above, recognises.

Why Are Courts Turning to AI Tools?

India’s justice system is beset by a backlog of tens of millions of pending cases, and judges face crushing workloads. AI promises efficiency gains (speech-to-text transcription, faster research) and better access to multilingual justice via machine translation. Kerala’s policy itself notes that approved AI tools may be used solely as assistive tools for “administrative assistance, legal research, transcription, translation or summarisation of case records”. In practice, Courts have piloted such uses; for instance, the Supreme Court’s SUVAS system translates judgments into nine languages, while SUPACE aids judges’ legal research to “reduce pendency delays”.

Yet AI use also entails well-known risks. Large language models can “hallucinate” facts or authorities, giving a false veneer of legal reasoning. The U.S. Mata v Avianca case starkly illustrates this danger: attorneys were sanctioned for filing fictitious case citations generated by ChatGPT. Judge Castel noted that while using “a reliable AI tool” is not inherently wrong, lawyers have a gatekeeping duty to verify its output. Similarly, Courts must avoid conflating speed with correctness; unchecked AI could accelerate injustice. The Kerala guidelines candidly warn that AI’s “indiscriminate use” may breach privacy, introduce data-security risks, and erode trust in decisions.

That said, limited and transparent AI uses can offer genuine benefits. Assisted translation of orders and anonymisation of records, for instance, can improve access and privacy without prejudicing outcomes. The key is human-in-the-loop oversight. Kerala’s guidelines, for example, mandate in clause 4.8 detailed audit logs and human verification of any AI output. Judges must “meticulously check any citations or translations” from AI. This aligns with best practices in the US and EU, where any deployed AI system should use reliable, interpretable methods and allow human override. In short, AI tools can help tame backlogs and meet multilingual needs, but only as advisors, not substitute judges.

Why Only the District Judiciary?

The Kerala High Court’s guidelines are stringent, but curiously limited in scope. They bind “all members of the District Judiciary in Kerala and the employees assisting them”, from trial judges to interns, but say nothing about the Kerala High Court itself. This creates an asymmetry: district Courts may only use “approved AI tools” under close supervision, while the High Court seems exempt. Yet we know judges at the higher levels have been experimenting with AI. Apart from SUVAS and SUPACE, there are reports of other High Court benches consulting chatbots and of the Supreme Court generating summaries of pleadings. In other words, the policy sets up a double standard: curbs on trial Courts, but silence on appellate Courts.

By restricting its application to the District Judiciary, the policy introduces an accountability asymmetry, under which lower court judges are placed under strictures while High Court judges are apparently at liberty to keep experimenting. Suppose a district judge painstakingly avoids AI for legal research, while a High Court judge (bound by no published rules) casually relies on a large language model. If the latter misfires, the fallout is far greater. Worse, litigants might wonder why the magistrate is treated like a lab rat while the appeals judge faces no such constraints. Such asymmetry can undermine faith in the system’s fairness.

Moreover, restricting AI regulation to only one level misses the systemic nature of the issue. If anything, any real regulatory scheme should have tier-agnostic, uniform rules for all Courts. Ideally, there would be national standards (or even legislation) requiring disclosure of AI use, audits of AI outputs, and independent review of critical applications. The Kerala policy hints at these norms by requiring audit logs and training programmes, but without extension to higher Courts, accountability remains patchy. Uniform guidelines, perhaps enacted by Parliament or framed by the Supreme Court and judicial councils, would prevent confusion and ensure that all judges, from magistrates to the Chief Justice, are equally bound by the demands of “transparency, fairness, accountability and confidentiality”.

Conclusion

The Kerala High Court’s new AI policy puts concrete guardrails around judicial AI use and signals that Courts are taking this technology seriously. 

In the future, India should aim for a unified framework for AI in Courts. The Supreme Court and the legislature (or an empowered committee) should consider rules requiring disclosure whenever AI “assists” a judgment, mandatory logs of AI use, and periodic audits of AI outputs for fairness and accuracy. A national roster of “approved tools” (perhaps developed in consultation with tech experts) could minimise the risk of rogue usage. Ethics guidelines for judges and lawyers, aligned with global norms, would further standardise practice. In particular, the “tier-agnostic” ideal must prevail: AI in a trial Court should be subject to the same principles as AI in a High Court or the Supreme Court.

Ultimately, judicial legitimacy rests on human judgment that is transparent, reasoned, and accountable. AI can be a potent aide in trimming backlogs and supporting legal work, but it cannot be the “brain” of justice. As Kerala’s policy shrewdly insists, the human judicial mind must remain front and centre. Only by keeping judges “in the loop” and subjecting technology to rigorous public-sphere scrutiny can we ensure that innovation strengthens rather than subverts the rule of law.