Agentic AI: Practical Approaches to Liability under Indian Law

Understanding Agentic AI

AI agents are systems that can autonomously perform sophisticated, iterative tasks based on user prompts, by ‘perceiving’, ‘reasoning’ and then acting, without step-by-step instructions.1

Consider this illustration. A user could sync their email account and calendar with an AI agent, and prompt it to arrange meetings within a given timeframe. In doing so, the AI agent would act with broad discretion. This contrasts with technical integrations or automations2 which have limited, structured operations, without exercise of discretion or autonomy.
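The contrast drawn above can be sketched in code. The following is a purely illustrative, simplified sketch (all names, data structures and scoring rules are hypothetical assumptions, not any real product's API): the "automation" books the first free slot by a fixed rule, while the "agent" perceives the calendar state, reasons over the user's preferences, and then acts, exercising discretion over which slot to book.

```python
from dataclasses import dataclass

@dataclass
class Slot:
    day: str
    hour: int
    free: bool

def fixed_automation(slots):
    """A structured integration: returns the first free slot.
    Limited, rule-bound operation with no discretion."""
    for s in slots:
        if s.free:
            return s
    return None

def agent_schedule(slots, preferences):
    """An 'agentic' loop: perceive, reason, act, without
    step-by-step instructions from the user."""
    # Perceive: gather the available slots
    candidates = [s for s in slots if s.free]
    if not candidates:
        return None
    # Reason: score each candidate against the user's preferences
    def score(s):
        day_weight = preferences.get(s.day, 0)
        hour_penalty = abs(s.hour - preferences.get("preferred_hour", 10))
        return day_weight - hour_penalty
    # Act: choose and book the best-scoring slot
    best = max(candidates, key=score)
    best.free = False
    return best
```

For example, given free slots on Monday at 9 and Tuesday at 10, the automation simply returns the Monday slot, whereas the agent, told that Tuesday is preferred, books the Tuesday slot on its own judgment. The same discretion that makes this useful is what creates the liability scenarios discussed below.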

The ability to delegate tasks to a system that acts independently and effectively is a significant benefit of agentic AI. However, it can also give rise to liability scenarios, as we explore below.

Case Studies: Unintended Autonomy v. Malicious Intervention

Company A develops an AI agent that can review certain parameters whenever a user visits a website, assess the efficacy of related online ad campaigns, and optimise budget allocation. Company B deploys this AI agent. The AI agent determines that none of Company B’s ad campaigns are effective, and cancels all of them. By the time Company B realises this, it has lost users and traction. In this scenario, the AI agent has potentially functioned in an unexpected manner, but arguably within the scope of its discretion.

Contrast this with a different scenario. Company C develops an AI agent that acts as a personal shopper. In this case, an employee of Company C tampers with the AI agent maliciously, such that in addition to legitimate orders, the AI agent places and cancels some orders at random intervals, routing refunds to the employee’s bank account. Company C’s customers suffer losses, and Company C suffers a loss of reputation and business besides being implicated in criminal investigations. Here, the liability arises from the AI agent’s acts, but was caused by a malicious interloper.

Can AI Agents be Liable?

Can liability be imputed to an AI agent?

The traditional understanding of agents is derived from the Indian Contract Act, 1872 (the “Contract Act”).3 As currently framed, AI agents cannot be agents under the Contract Act, since agents must be majors of sound mind.4 Further, legal frameworks tend to impute liability only to natural or legal persons.

By this reasoning, an AI agent cannot itself face liability. Courts are instead likely to impose liability on the natural or legal persons involved: the developers and/or deployers of the AI agent. This in turn precipitates the need to address inter se liability between these actors – which will predominantly depend on the contractual arrangements between the parties.

That said, other laws may also apply, depending on the factual scenario:

  • If there are defects in the development of the AI agent, deficiencies in service or misrepresentations, product liability laws such as the Consumer Protection Act, 2019 (“CPA”), may be attracted;5
  • Claims in tort may apply, based on the expected standard of care and any negligence in meeting it;
  • Laws such as the Information Technology Act, 2000 (the “IT Act”) may apply, given that AI agents are, at their heart, computer-based systems;6 and
  • Where the AI agent processes personal data, the Digital Personal Data Protection Act, 2023 and rules thereunder would also be attracted.

Practical Mitigation Steps

Given the breadth of knowledge and discretion available to AI agents – and that they may well act while their human instructors sleep – it is essential to build in as many safeguards as possible when developing or deploying agentic AI.

In the absence of an overarching law that addresses agentic AI liability, however, the first priority would be to understand the respective roles, responsibilities and positions of the persons developing and deploying the AI agent.

For businesses that develop and deploy AI agents, the following mitigation measures are advisable:

  • ensuring sufficient ‘anti-jailbreak’ measures;
  • implementing guardrails on how the AI agent may act;
  • performing adequate real-world testing, prior to deployment; and
  • including human alert mechanisms.
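The "guardrails" and "human alert" measures above can be sketched, in a purely illustrative way, as a policy check wrapped around every action the agent proposes. All names, thresholds and the alert channel below are hypothetical assumptions for illustration only:

```python
# Illustrative guardrail wrapper: every action the agent proposes is
# checked against a policy before execution. Out-of-policy actions are
# blocked and escalated to a human rather than carried out.
# All names and thresholds here are hypothetical assumptions.

ALLOWED_ACTIONS = {"pause_campaign", "adjust_budget"}
MAX_BUDGET_CHANGE = 0.20  # agent may shift at most 20% of budget per action

alerts = []  # stand-in for a human alert channel (e.g. email or pager)

def guarded_execute(action, amount_fraction=0.0):
    """Permit the action only if it is within policy; otherwise
    record a human alert and refuse."""
    if action not in ALLOWED_ACTIONS:
        alerts.append(f"blocked: '{action}' is outside the allowed action set")
        return False
    if abs(amount_fraction) > MAX_BUDGET_CHANGE:
        alerts.append(f"blocked: '{action}' exceeds the budget-change limit")
        return False
    return True  # in a real deployment, the action would run here
```

Under this sketch, an in-policy action such as a 10% budget adjustment proceeds, while an action like cancelling all campaigns (as in the Company B scenario) is refused and flagged for human review, keeping a person in the loop for consequential decisions.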

For deployers, the following mitigation measures are advisable:

  • performing due diligence on the person/entity developing the AI agent, and the AI agent, prior to deployment;
  • ensuring that appropriate guardrails have been implemented in the development of AI agents;
  • carrying out pilot tests prior to full-scale deployment; and
  • ensuring clarity in instructing AI agents, with frequent monitoring and calibration based on performance.

In both cases, responsibilities, obligations and respective liabilities must be discussed, negotiated and recorded in contractual arrangements. While it may not be possible to predict every form of liability, addressing broader categories and carving out certain exceptions – much like in other technology contracts – will help.

Ultimately, AI agents represent an ongoing shift in technology, intended to assist humans. By anticipating possible liability scenarios and addressing them through preventive measures and contractual safeguards, sufficient mitigation can be introduced for businesses to develop and deploy them more confidently.

Disclaimer – The information provided in this document is solely for general interest and information and is not intended to constitute legal advice and therefore should not be relied upon in any manner. The sending/sharing of this document does not create an attorney-client relationship between Poovayya & Co. and the recipient. For more specific comprehensive and up-to-date information or for legal advice and assistance, you should seek the opinion of legal counsel. Reproduction, distribution and/or republication of this document or the content of this document is prohibited unless you have obtained prior written permission from Poovayya & Co.

1. For a better understanding of agentic AI, see https://blogs.nvidia.com/blog/what-is-agentic-ai/ and https://mitsloan.mit.edu/ideas-made-to-matter/agentic-ai-explained, both last accessed March 3, 2026. In essence, AI agents promise human-like capabilities without the attendant needs.
2. For example, most email account providers now also provide a synced calendar; however, there is no automation in writing emails or in setting up meetings, simply by virtue of this integration.
3. An agent under the Contract Act is a “person employed to do any act for another, or to represent another in dealings with third persons”. See Section 182, Contract Act. Broadly, where an agent represents a principal, the agent’s acts bind the principal. Further, the Contract Act provides that agents may be indemnified against the consequences of lawful acts, and acts done in good faith, by such agents. There are exceptions, however, where the agent is engaged to carry out a criminal act. See Sections 222, 223 and 224 of the Contract Act.
4. The term “major” as used here means a person of the age of eighteen (18) years or more.
5. See for instance, Section 84 of the CPA setting out liability for product manufacturers, including where there are design defects, deviations from manufacturing specifications, or failure to conform to express warranties. Deficiencies in service can also be claimed against under Section 85 of the CPA.
6. The IT Act imposes penalties on persons who introduce a ‘computer contaminant’ into any computer system, cause damage or disruption to a computer, and the like. The IT Act also includes penalties and liabilities associated with failure to protect computer systems, and with unauthorised access to and disclosure of confidential information, some of which, when done dishonestly or fraudulently, also constitute criminal offences. Further, rules established under the IT Act, such as the Information Technology (Intermediary Guidelines and Digital Media Ethics Code), 2021 (the “Intermediaries Guidelines”), may also apply, depending on the nature and industry of deployment of the AI agent. The Intermediaries Guidelines set out requirements for intermediaries such as social media platforms, and impose due diligence, content takedown and grievance redressal requirements, amongst others.
