[Mustafa Rajkotwala works on AI, Strategy and Legal Engineering at NYAI. He is a commercial and technology lawyer based in Mumbai, India.]
Artificial Intelligence (AI) is no longer confined to operational functions. It is increasingly deployed in Indian corporate boardrooms to assess risk, monitor compliance, process financial data, and shape strategic decision-making, enabling companies to synthesise information at a scale and speed previously unattainable. While these systems promise efficiency, predictive insight, and enhanced monitoring, they raise a fundamental legal question: if corporate decision-making is increasingly mediated through algorithmic systems, does the existing framework adequately preserve fiduciary accountability?
The Companies Act, 2013 rests on a clear institutional premise: directors are natural persons who exercise judgment and bear responsibility. Section 2(34), read with sections 149, 152 and 153, contemplates directors as individuals possessing legal identity, while section 166 imposes duties of good faith, care, diligence, and independent judgment. AI meets none of these preconditions: it has no legal personality, no agency, and no capacity to bear fiduciary responsibility.
The issue, therefore, is not whether AI can become a director, but whether its growing role in shaping board decisions risks altering how judgment itself is exercised. The primary concern is not the delegation of functions, but the gradual delegation of judgment.
AI in Corporate Governance is No Longer Merely Assistive
The legal problem becomes clearer once the function of AI in governance is identified. Boards have always relied on external inputs, including consultants, auditors, valuers, and lawyers. In that sense, AI is another form of external input. The difference is that AI does not merely supply information. It can structure how decisions are reached.
A useful distinction may be drawn between three forms of AI deployment: (a) assistive AI, which provides data analysis, alerts, and forecasts while leaving decision-making to humans; (b) agentic or augmented AI, which prioritises risks, narrows options, generates recommendations, and may trigger pre-authorised actions; and (c) autonomous AI, which would operate without meaningful human involvement.
Indian corporate law can accommodate the first category, as boards may rely on analytical tools just as they rely on expert advice. The third category remains legally untenable, as the statutory framework of directorship is tied to natural persons. The difficulty lies in the second category, where AI begins to structure decisions rather than inform them, blurring the line between assistance and substitution.
This distinction matters because fiduciary law is concerned with process, not merely outcomes. A board that uses AI to detect compliance anomalies differs from one that allows AI-generated recommendations to shape how strategic alternatives are evaluated. The latter raises the direct question of whether directors are still exercising judgment in the manner the law requires.
Judgment, Care, and the Limits of Algorithmic Reliance
The Indian corporate framework requires directors to deliberate, supervise, and exercise judgment. Section 166 codifies these obligations, requiring directors to act in good faith, to act with due and reasonable care, skill and diligence, and to exercise independent judgment. Section 2(34), read with sections 149, 152 and 153, reinforces that these duties attach to natural persons.
Within this single category of directors, the statute imposes differentiated roles. Independent directors under section 149(6) are subject to the same duties, but with heightened expectations of independence and oversight. They safeguard minority interests, evaluate related party transactions under section 188, and oversee audit and risk processes through committees under section 177.
Indian courts have consistently affirmed this understanding. In Official Liquidator v. P.A. Tendolkar, it was held that directors cannot evade responsibility by pleading ignorance where they have failed to supervise company affairs. In Vaishnav Shorilal Puri v. Kishore Kundan Sippy, the court affirmed that the duty of loyalty requires directors to prioritise the company’s interests. In Nanalal Zaver v. Bombay Life Assurance Co., it was held that directorial powers must be exercised for proper purposes. In Dale & Carrington Investment v. P.K. Prathapan, the court emphasised honesty and good faith. These decisions establish that directorship is grounded in active judgment and accountability.
While AI may improve access to information, it introduces a structural risk. Where outputs are treated as reliable without scrutiny, oversight becomes more data-rich but less judgment-intensive. Directors may come to rely on the very systems they are expected to interrogate. The expectation of independence does not diminish. It becomes more demanding.
Corporate law permits delegation of functions, not delegation of judgment. Boards may delegate execution, investigation, or data processing, but not the final evaluative role. Section 179 permits delegation, but ultimate responsibility remains with the board. AI places pressure on this boundary. Where algorithmic outputs are treated as decisive rather than provisional, boards may effectively delegate judgment without formally doing so. The question is not whether a decision is formally approved, but whether directors exercised independent judgment in arriving at it.
The difficulty extends to the duty of care, which requires directors to act on an informed basis. Where reliance is placed on systems that are opaque or not capable of meaningful interrogation, the informational foundation may be wide but shallow. Directors may have access to more data, but less understanding.
Comparative jurisprudence illustrates the standard expected. In Smith v Van Gorkom, the Delaware Supreme Court held directors liable for approving a transaction without adequately informing themselves, affirming that boards must act on an informed basis. The business judgment rule protects such decisions, but only where they reflect good faith and informed judgment. The principle is consistent with Indian law, which does not protect passive or uncritical reliance.
AI complicates this requirement. Where directors cannot explain why an output was relied upon or how its limitations were assessed, the decision is difficult to characterise as informed. This raises the standard of care expected of those who rely on such systems. Directors may therefore face liability as “officers in default” under section 2(60), and in serious cases under sections 447 to 449. The use of AI does not dilute these obligations.
AI-driven harm may also be difficult to predict or attribute where outcomes arise from the interaction of data, models, and organisational use. Corporate liability alone may be insufficient. Requiring companies to maintain adequate insurance in decision-critical contexts would ensure compensation while creating incentives for stronger oversight and controls. This complements, but does not replace, fiduciary responsibility.
Oversight Duties Now Extend to the Systems that Shape Decisions
While Indian corporate law does not articulate a standalone doctrine equivalent to In re Caremark International Inc. Derivative Litigation, the underlying principle of system-level responsibility is consistent with the statutory framework. Under section 166, directors must exercise due care and diligence, which includes ensuring adequate internal systems for monitoring risk, compliance, and financial integrity. This is reflected in the structure of the Act, particularly section 177, which mandates audit and risk oversight through board committees. Directors are responsible not only for decisions, but for the systems through which information reaches them.
Comparative jurisprudence illustrates this principle. In re Caremark required directors to ensure adequate information and reporting systems. Stone v Ritter refined this standard, recognising that a sustained failure to monitor such systems may amount to a breach of fiduciary duty. The principle reflects a broader obligation of system-level oversight aligned with the Indian statutory position.
As AI systems become integrated into compliance monitoring, risk management, contract review, and strategic modelling, they form part of the governance infrastructure that boards must oversee. The duty does not require directors to become technologists. It requires them to interrogate the systems on which they rely, including their reliability, data inputs, testing, challenge mechanisms, audit trails, and safeguards against bias or misuse.
This obligation is further reinforced by India’s data governance framework. The Digital Personal Data Protection Act, 2023 introduces the concept of data fiduciaries, placing responsibility on entities for lawful and fair processing of data. Where AI systems operate using personal or sensitive data, boards must ensure compliance with both corporate law obligations and data protection standards. Oversight therefore extends to both the design and deployment of such systems.
AI does not reduce oversight obligations. It extends them. Once governance becomes system-dependent, oversight becomes system-oriented.
Comparative Regulatory Developments
Comparative developments show a consistent pattern. AI may be integrated into governance, but accountability remains human.
The United Kingdom’s Companies Act 2006 codifies directors’ duties in sections 171 to 177, including the section 172 duty to promote the success of the company. In Regal (Hastings) Ltd v Gulliver, directors were held liable for profiting from their position even in the absence of bad faith, reinforcing the strictness of fiduciary obligations.
The European Union (EU) adopts an ex ante regulatory model through the EU AI Act. Systems classified as “high-risk” are subject to structured obligations of risk management, transparency, and human oversight. The framework does not prohibit AI in governance, but ensures that its deployment remains supervised and auditable.
Singapore has moved further in operationalising board-level AI governance. In 2025, the Monetary Authority of Singapore (“MAS”) issued proposed Guidelines on Artificial Intelligence Risk Management, placing responsibility for AI oversight on boards and senior management. These require institutions to maintain an inventory of AI systems, conduct materiality assessments based on impact, complexity, and reliance, and implement lifecycle controls covering data governance, explainability, testing, monitoring, and change management.
The MAS framework treats AI not as a technical tool but as a governance risk. Boards are expected to integrate AI risks into enterprise risk management, ensure human oversight in decision-making, and adopt proportionate controls based on the significance of AI usage. This approach is reinforced by Singapore’s evolving Model AI Governance Framework and the 2026 updates addressing agentic AI and generative AI systems, which emphasise explainability, traceability, and calibrated human involvement.
These approaches reinforce that technological integration does not displace fiduciary responsibility, but extends it to the systems through which decisions are shaped. This is consistent with Indian law under sections 166 and 177.
The Appropriate Indian Corporate Law Response
The challenge is not one of rewriting corporate law from first principles. The necessary legal tools already exist. The issue is one of application and clarification.
Indian corporate governance would benefit from clearer articulation that reliance on AI does not dilute duties under section 166; that independent judgment applies even where algorithmic systems influence decisions; that auditability must be preserved; that oversight extends to algorithmic systems; and that independent directors must interrogate AI-assisted outputs.
A formal statutory amendment may not be necessary. A calibrated soft law approach could suffice. The Securities and Exchange Board of India (SEBI) and the Ministry of Corporate Affairs (MCA) may issue principles-based guidance clarifying that AI systems fall within board oversight under sections 166 and 177, supported by a “comply or explain” disclosure framework. The modalities would focus on process: maintaining an inventory of material AI systems, documenting decision trails where outputs materially influence decisions, and ensuring periodic audit committee review of system reliability, limitations, and risks. Industry bodies may complement this through sector-specific codes and model governance standards. These are not new principles, but existing ones applied to a new environment.
Conclusion
The increasing use of AI in corporate boardrooms does not expose a gap in Indian company law. The Companies Act, 2013 continues to vest governance in accountable human actors through sections 149 and 166. The challenge lies in ensuring that these duties retain substantive meaning as decision-making becomes increasingly mediated through algorithmic systems. The legal question is therefore one of application, not recognition.
Section 166 requires directors to exercise independent judgment and due care. These obligations cannot be discharged through uncritical reliance on AI-generated outputs. Where systems shape how decisions are framed and evaluated, directors must interrogate their assumptions, limitations, and reliability. Indian jurisprudence has consistently rejected passive directorship, and that principle applies equally to technological reliance. AI does not dilute fiduciary duties. It sharpens how they must be performed.
The appropriate regulatory response lies in clarification rather than reconstruction. Guidance from the MCA and SEBI can assist in setting expectations around explainability, auditability, and board-level oversight of AI systems. Risk allocation mechanisms, including insurance, may complement this framework without displacing fiduciary responsibility. Corporate governance under Indian law remains anchored in human judgment. AI may inform that process, but it cannot substitute the accountability the law demands of directors.
– Mustafa Rajkotwala