The advent of Artificial Intelligence (AI) has touched multiple dimensions of human society, with systems built to emulate human cognition such as learning and problem solving. AI has become exponentially ingrained in administration, governance and policy-framing processes across the globe. It has emerged as an influential asset that significantly affects decisions and opportunities, playing a critical role in contexts spanning from predictive policing to automated welfare eligibility systems and evidence-based analysis. These systems speed up, simplify and streamline human workloads and assure an efficient flow of work; even in legal systems they have the potential to actively reduce backlogs and process bulky document analysis in a short span of time. However, they also spark profound questions regarding fairness, transparency and methodology.
Even while affirming the efficiency and problem-solving capacity of AI, valid questions remain about the opacity of its internal workings and the reasoning behind the outputs it generates. These systems operate as “black boxes”: the mechanism and methodology behind a result generated by an AI system are difficult to discern, which breeds skepticism about the outcome. In the legal sphere, where a person’s rights and freedoms are subject to judgment, this opacity threatens transparency. For decisions such as granting or denying a loan or predicting criminal liability and risk, the humans overseeing the system may not be able to understand how or why it arrived at a particular decision. This invites the bypassing of due process protections, which cannot function where decision pathways are impervious to independent review.
From a constitutional lens, the inscrutability of such decision-making raises fundamental questions about whether these systems align and comply with the principles of “natural justice”.
Natural justice, a principle that traces its genesis to the Roman “jus naturale”, demands fairness in legal proceedings. To ensure justice, it lays down key maxims: “Audi Alteram Partem” – hear the other side; no one should be condemned unheard – and “Nemo Judex in Causa Sua” – no one should be a judge in their own case, ensuring impartial decisions. These principles apply across administrative, legislative and judicial decisions to ensure fair, just and impartial justice delivery in India.
In India, this principle intersects with the right to equality under Article 14 and the guarantee of fair procedure under Article 21 of the Constitution of India.
This article critically investigates whether AI decision-making is compatible with the principles of natural justice and the fundamental values and rights embedded in the Constitution. It also examines the case for recognising a “Right to Explanation” in order to protect those constitutional values.
AI Decision-Making and the Black Box Problem
Modern AI models are trained on large datasets. They operate through layered mathematical computations that transform input data into outputs. They form networks of linked nodes, or neurons, arranged in sequential layers, which recognise features and patterns through weighted connections and activation mechanisms in order to ingest raw data and yield automated decisions or predictions. During training, a model adjusts millions or even billions of parameters (its weights and biases) to minimise error, forming a complex system with an “intuition” for patterns. Owing to the intricacy of this architecture, the process becomes opaque, fueling debates on a “right to explanation” in the legal sphere. This inability to understand the reasoning behind the outputs is the phenomenon called “the black box problem”.
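The opacity described above can be sketched in miniature. The toy “risk scoring” network below uses hand-picked, purely illustrative weights and hypothetical input features; real systems learn millions of such parameters. The point is that the output is just arithmetic over numeric weights, none of which states a human-readable reason for the result.

```python
import math

def sigmoid(x):
    """Standard activation function squashing any value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical inputs: [income (normalised), prior defaults, age (normalised)]
# All weights below are invented for illustration only.
W1 = [[0.8, -1.2, 0.3],   # weights feeding hidden neuron 1
      [-0.5, 2.1, -0.7]]  # weights feeding hidden neuron 2
b1 = [0.1, -0.3]          # hidden-layer biases
W2 = [1.5, -2.0]          # weights feeding the output neuron
b2 = 0.2                  # output bias

def risk_score(features):
    # Layer 1: each hidden neuron is a weighted sum passed through sigmoid.
    hidden = [sigmoid(sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(W1, b1)]
    # Layer 2: combine hidden activations into a single score in (0, 1).
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)) + b2)

applicant = [0.4, 1.0, 0.6]          # hypothetical applicant data
print(f"risk score: {risk_score(applicant):.3f}")  # a number, with no stated reason
```

Even in this two-layer sketch, no single weight corresponds to a reason an affected person could contest; at the scale of modern models, that disconnect is the black box problem.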
In the administrative and legal spheres, authorities are required to provide the justification behind their decisions. This requirement ensures transparency in decision-making, gives both parties an insight into the reasoning, and allows them to challenge decisions, promoting the accountability of bodies and systems. These safeguards blur when an opaque algorithm is in charge of making decisions.
To illustrate: if an AI model flags a person as “high-risk” for loan repayment or in a policing system, the affected person may have no access to the logic behind that result and yet must bear its consequences. This obstruction of explainability creates a barrier to legal scrutiny, barring courts, lawyers and officials from determining whether the result passes the tests of fairness and non-arbitrariness. This is central to our discussion, which contends that such “opacity” contravenes the principles of natural justice.
Natural Justice and Right To Explanation
Forming the foundational basis of administrative and adjudicative fairness, the two doctrines of natural justice are:
Audi Alteram Partem: the right of a person to be heard before a decision that affects them is made.
Nemo Judex in Causa Sua: no one should be a judge in their own case, ensuring impartial adjudication.
The principle of “Audi Alteram Partem” requires that both parties be heard: individuals must be given the opportunity to present their case and respond to the evidence against them. For this opportunity to be meaningful, the individual concerned must understand the basis on which the decision has been made; for that, they need to know how the decision was made.
Indian courts have reiterated the ethos of the Constitution and repeatedly emphasised the importance of fairness and transparency in decision-making. In Maneka Gandhi v. Union of India, the Supreme Court expanded the interpretation of Article 21, holding that any procedure depriving a person of life or liberty must be just, fair and reasonable.
Similarly, in E.P. Royappa v. State of Tamil Nadu, the court recognised that arbitrariness is incompatible with equality under Article 14. Through Justice P.N. Bhagwati’s opinion, the court reasoned that equality under Article 14 is not a mere abstract concept of uniformity but a dynamic principle antithetical to arbitrariness in state action. In AI decision-making, this principle poses a serious question: does the system’s impenetrability offend Article 14? “Black-box” systems that process data without discernible rationale may be violative of constitutional equality and justness, urging judicial action for algorithmic transparency.
If algorithmic decisions possess such complexity, can a decision be considered “fair” when the logic underlying its reasoning is incomprehensible?
To address this gap, the concept of a “Right to Explanation” comes into play. It requires that an adjudicative process provide understandable reasons for its outcomes: an individual’s right to receive a clear, comprehensible and meaningful account of how an AI system reached its decision, especially when that decision adversely affects their rights, health or safety.
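One form such an account could take is per-feature attribution: reporting how much each input pushed the decision one way or the other. The sketch below uses an intentionally interpretable linear scoring model; the feature names, weights and threshold are all hypothetical, and this is only one possible design for a meaningful explanation, not a prescribed standard.

```python
# Hypothetical interpretable model: a weighted sum of named features.
WEIGHTS = {"income": 0.5, "prior_defaults": -2.0, "years_employed": 0.3}
THRESHOLD = 0.0

def decide_with_reasons(applicant):
    """Return a decision plus each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    decision = "approve" if score >= THRESHOLD else "deny"
    # Rank reasons by how strongly each feature pushed the decision.
    reasons = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return decision, reasons

decision, reasons = decide_with_reasons(
    {"income": 1.2, "prior_defaults": 1.0, "years_employed": 2.0}
)
print(decision)
for name, contribution in reasons:
    print(f"  {name}: {contribution:+.2f}")
```

Here the affected person can see not just the outcome but the dominant factor behind it, giving them a concrete basis on which to contest the decision.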
Constitutional Dimensions: Articles 14 and 21
AI-driven judgment formation must be analysed within the broader constitutional framework that governs state action.
Article 14: Equality and Non-Arbitrariness
This article guarantees equality before the law and prohibits arbitrary state action. The Supreme Court has repeatedly interpreted this provision to require that decisions rest on rational and transparent grounds, grounds that are not merely applied by the decision-maker but can also be understood by the person affected.
If a system relies on pre-determined datasets or hidden parameters, the inability to understand or access the inputs, or the basis on which results are reached, may lead to a situation where even discriminatory outcomes remain difficult to uncover.
Without transparency or an “explanation”, it becomes extremely difficult to identify and pin down such discrimination, undermining the constitutional guarantee of equality.
Article 21: Fair Procedure
Article 21 protects the right to life and personal liberty and mandates that any deprivation of these rights must follow a fair and reasonable procedure.
In Justice K.S. Puttaswamy v. Union of India, the Supreme Court of India recognised privacy as a fundamental right under the ambit of Article 21 and emphasised the need to regulate emerging technologies that affect and influence personal autonomy and dignity.
A system that makes decisions regarding eligibility for benefits, employment or criminal risk intimately affects the lives of those within its reach. If the process lacks transparency or any obligation to explain, it poses a serious threat to the requirement of fair procedure.
Consequently, the “right to explanation” is a crucial safeguard for ensuring compliance with Article 21.
Comparative Perspectives on “Right to Explanation”
On a global scale, policymakers and governing agencies have begun to acknowledge the urgency of addressing the issues posed by algorithmic decision-making and its complexities.
One notable example is the European Union’s General Data Protection Regulation (GDPR), which contains provisions relating to data protection and automated decision-making. In specified circumstances, individuals have the right to access meaningful information about the logic involved in automated processing, manifesting the importance of transparency for protecting individual rights in the age of digitalisation.
India currently has no concrete legislation governing AI, its influence on justice, or the potential threats it poses to accountability; however, emerging policy frameworks have continually emphasised the importance of AI and its future implications.
Challenges in implementing the Right to Explanation
The right to be informed and to receive an explanation is fundamental in nature and crucial for ensuring fairness and justice; its implementation, however, remains a challenge.
Technical Complexity: As stated earlier, AI relies on complex models, making it difficult even for developers to decode and fully understand the results that are yielded.
Trade Secrets and Proprietary Algorithms: Private companies treat algorithms as assets, and mandating full transparency may conflict with intellectual property interests.
Risk of Superficial Explanation: The unpredictability of AI lies not only in its complexity but also in the danger of oversimplification. Without access to the underlying methodology, an “explanation” may be produced that is oversimplified and ignores the nuances of the problem at hand. For the right to explanation to be meaningful, the account provided must be clear, articulate, transparent and accurate.
Reimagining approaches
Requirement of Algorithmic Transparency: The criteria, datasets and input models should be disclosed by the concerned authorities.
Independent Audits: An independent body should be mandated to scrutinise these systems periodically to detect biases and errors.
Human Oversight: AI is not completely accurate and errors are bound to occur; a designated human should therefore oversee and review the results it provides.
Legal Recognition of the Right to Explanation: Legal recognition of the “Right to Explanation” would curb AI’s lack of accountability by giving citizens the ability to challenge automated decisions.
Conclusion
Artificial Intelligence has proven to be one of humanity’s most revolutionary ideas, with the potential to ease and simplify problems. With that immense power, however, come shortfalls of accountability, transparency and complexity. The Constitution of India embodies the value of justice, and through landmark cases the courts have articulated how transparency and fairness in legal procedure are integral to it. Despite AI’s potential to ease the burden on courts and deliver speedy decisions, questions linger around its opacity and complexity, which can produce biased outcomes while the subject of a decision lacks access to the reasoning behind a given output. This can be tackled through a properly implemented “Right to Explanation”, which grants the right to seek an explanation for AI-driven results so as to ensure transparency and fairness.
As AI technologies continue to evolve, it is essential that legal frameworks adapt accordingly. Ensuring transparency and accountability in automated decision-making is not merely a technical issue but a constitutional imperative necessary to preserve the rule of law in the digital era.
THIS ARTICLE IS WRITTEN BY ANUSHKA JHA FROM CHANAKYA NATIONAL LAW UNIVERSITY.