
Evolving Artificial Intelligence: Prevention or Facilitation of White-collar Crimes
While there are signals of a positive shift from passive observation to active prevention, there is still a need to closely monitor developments, establish robust technical standards for AI auditability, promote mandatory AI literacy, and ensure judicial preparedness to handle AI-enabled offences
Artificial intelligence (AI) has become a critical tool in the identification, detection, tracing, monitoring, and prevention of white-collar crimes in an increasingly complex and cross-jurisdictional corporate environment. Yet its growing adoption and various forms present a paradox: while AI enhances efficiency and capability, it can also be misused to facilitate, conceal and execute sophisticated forms of fraud, unethical conduct, and financial or operational manipulation.
On the enforcement side, AI has significantly improved fraud detection across sectors such as banking, digital payments and compliance. Global financial institutions have reported substantial reductions in fraud losses through real-time behavioural analysis, pattern recognition, and anomaly detection across large volumes of transactions. Indian regulators have similarly encouraged the deployment of advanced analytics and AI-driven surveillance to address the growing complexity of financial and corporate fraud.
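The anomaly-detection techniques mentioned above can be illustrated with a deliberately minimal, hypothetical sketch: flagging transactions whose amounts deviate sharply from an account's historical pattern using a simple z-score rule. Production fraud-detection systems rely on far richer behavioural features and learned models; the function and threshold below are illustrative assumptions only.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Flag transaction amounts that deviate more than `threshold`
    standard deviations from the account's historical mean.
    A simplified stand-in for the behavioural models real
    fraud-detection systems use."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# Typical small payments with one large outlier
history = [120, 95, 110, 130, 105, 98, 125, 50000]
print(flag_anomalies(history))  # → [50000]
```

A real system would score transactions as they stream in, against per-account and peer-group baselines, rather than over a static list.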
Conversely, there are increasing instances where entities misuse AI to commit white-collar crimes in subtle yet high-impact ways that often evade traditional internal controls, external audits, ethical surveillance and regulatory scrutiny. AI-generated content can be deployed to impersonate individuals through fake voice commands, create forged documents, enable fraudulent authorisations, divert funds, or manipulate internal approval processes. It can also generate multiple layers of false or misleading information, enabling perpetrators to bypass or confound traditional scrutiny and investigations.
AI tools are increasingly being used to generate false disclosures, audit reports, invoices, advertisements, discount schemes, and even regulatory filings, making misrepresentation harder to detect and trace. In the financial markets, AI-driven trading algorithms could potentially be structured to facilitate market manipulation while obscuring human intent behind automated decision-making. Entities could also exploit AI to conduct data scraping and surveillance of peers, competitors, employees, or customers, potentially leading to unlawful data use or privacy violations. Together, these practices illustrate how AI can transform traditional white-collar offences into more scalable and harder-to-trace forms of misconduct, posing significant challenges for regulators and enforcement agencies.
Recognising the growing misuse of AI-generated content, the Ministry of Electronics and Information Technology (MeitY) has recently amended the regulatory framework governing digital intermediaries under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These developments impose increased due-diligence obligations on intermediaries and tighten compliance, marking a shift in India's regulatory approach to AI-enabled white-collar crime.
while AI enhances efficiency and capability, it can also be misused to facilitate, conceal and execute sophisticated forms of fraud, unethical conduct, biased decision-making and financial manipulation
The amendments introduce the concept of “Synthetically Generated Information” (SGI), which includes audio, visual and audio-visual content that is artificially or algorithmically generated in a manner that appears indistinguishable from real individuals or real-world events. The framework moves beyond passive disclosure towards active prevention by mandating detection measures, stricter takedown and grievance-redressal timelines, and implementation of labelling, metadata and provenance mechanisms.
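The labelling, metadata and provenance mechanisms contemplated by the amended framework can be sketched in a simplified form. The record structure below is purely hypothetical and is not prescribed by the Rules: a generator attaches a machine-readable "synthetic" label together with a hash of the content, and any downstream platform can later verify that the content has not been altered since it was labelled.

```python
import hashlib

def label_content(content: bytes, generator: str) -> dict:
    """Attach a hypothetical provenance record to generated content:
    a 'synthetically generated' flag plus a SHA-256 hash binding the
    record to the exact bytes it describes."""
    return {
        "synthetic": True,
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_label(content: bytes, record: dict) -> bool:
    """Check that the content still matches the hash in its record."""
    return record.get("sha256") == hashlib.sha256(content).hexdigest()

clip = b"...generated audio bytes..."
record = label_content(clip, generator="example-model")
print(verify_label(clip, record))         # unaltered content -> True
print(verify_label(clip + b"x", record))  # tampered content -> False
```

Real provenance schemes additionally sign such records cryptographically and embed them in the media file itself, so the label travels with the content rather than alongside it.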
Interestingly, text-only generative outputs, such as written responses, summaries or code generated through AI tools, may not fall within the scope of SGI unless they result in the creation of a false document or false electronic record. The framework thus directly responds to the growing use of AI in identity manipulation, market misinformation, and the creation of false electronic records, while still accommodating legitimate and good-faith uses of AI.
As part of the IndiaAI Mission, MeitY released the ‘India AI Governance Guidelines’. The guidelines outline 7 (seven) ethical principles to ensure the responsible development and deployment of AI technologies in India, emphasising the need for transparency in algorithmic decision-making and non-discriminatory outcome testing. Sector-specific initiatives are also being undertaken to prevent the misuse of AI. For instance, the National Health Authority (NHA) is exploring the use of AI to strengthen transparency and integrity within India’s digital health ecosystem, including the deployment of AI-based tools to detect fraud and irregularities in government health schemes.
The Reserve Bank of India (RBI) also constituted a Committee to develop a ‘Framework for Responsible and Ethical Enablement of Artificial Intelligence’ (FREE AI), to assess AI adoption in the financial sector, identify associated risks, and recommend safeguards for its responsible deployment. The Committee’s report emphasises balancing innovation with ethical and governance safeguards, aligning with broader AI policy initiatives, including NITI Aayog’s National Strategy for AI and India’s AI Mission. It highlights the need for capacity building and proposes mechanisms such as an AI innovation sandbox for regulated entities and FinTechs, while recommending that RBI’s regulatory framework may gradually incorporate obligations relating to transparency of AI systems, fairness testing, risk monitoring and enhanced governance for AI-driven decision-making.
The Securities and Exchange Board of India (SEBI) is also increasingly deploying AI and advanced analytics to strengthen market regulation, enhance investor protection and modernise compliance oversight. By analysing vast volumes of structured and unstructured market data in real time, these tools enable earlier detection of market manipulation, insider trading and other irregularities. At the regulatory level, SEBI has issued circulars requiring regulated entities to disclose their use of AI and machine-learning tools, allowing the regulator to track adoption trends and assess compliance risks. Other Indian regulators such as TRAI and IRDAI have similarly encouraged the use of AI and machine-learning tools to improve consumer services, risk monitoring and data protection.
Conclusion:
India’s evolving regulatory landscape contains provisions aimed at preventing the misuse of technology and emerging digital tools. The challenge for lawmakers and regulators is to ensure that governance keeps pace with technological development. While there are signals of a positive shift from passive observation to active prevention, there is still a need to closely monitor developments, establish robust technical standards for AI auditability, promote mandatory AI literacy, and build judicial preparedness to handle AI-enabled offences. Finally, at the legislative level, there is an urgent need for a comprehensive, sector-agnostic AI policy that focuses not only on innovation but also on the prevention, detection and identification of sophisticated crimes committed using AI.
Disclaimer – Views are personal