By Team NYAI

AI Hallucinations in Indian Courts: What Every Lawyer Must Know

India's courts are no longer merely raising concerns about AI-generated fake citations. They are imposing costs, staying orders, and using the word misconduct.

That shift in language matters. It marks the boundary between a profession still deliberating about AI and one now accountable for how it uses it.

The Record in Indian Courts

The incidents are documented and recent.

In August 2025, an additional junior civil judge in Vijayawada dismissed objections in a property dispute, citing four Supreme Court judgments in support. When the matter was challenged before the Andhra Pradesh High Court, all four judgments were found to be AI-generated. None existed. The High Court accepted the judge's explanation of good faith. The Hon'ble Supreme Court did not. In Gummadi Usha Rani & Anr. v. Sure Mallikarjuna Rao & Anr., SLP(C) No. 7575/2026, a bench of Justices P.S. Narasimha and Alok Aradhe stayed the proceedings and declared plainly: reliance on non-existent, AI-generated judgments "is not an error in decision making." It is "misconduct." Legal consequences, the Court warned, shall follow.

This was not an isolated case.

On February 13, 2026, the Hon'ble Supreme Court dismissed a Special Leave Petition after the petitioner was found to have cited non-existent judgments, drafted from online articles without verification of the original orders. On February 17, 2026, Chief Justice of India Surya Kant termed the practice of using AI to draft petitions "absolutely uncalled for" and noted that a case called Mercy v. Mankind - cited before a bench of the Court - had never existed.

The Bombay High Court imposed a cost of ₹50,000 in January 2026 on a litigant for citing a fabricated case in written submissions, noting "give-away features" of AI generation in the filing. In October 2025, another Bombay High Court bench quashed an Income Tax assessment order that had added over ₹22 crore to a company's income on the basis of three judicial decisions that were entirely non-existent. Before the Income Tax Appellate Tribunal, in Buckeye Trust v. PCIT-1 Bangalore, a tax ruling was retracted after fictitious case laws were discovered in the record. In September 2025, a petitioner before the Delhi High Court withdrew their plea after the opposing side demonstrated that the cited precedents did not exist.

These are not cases of poor legal argument. They are cases of fabricated authority - submitted to courts, relied upon by decision-makers, and now consequential for the lawyers who filed them.

Why AI Generates Citations That Do Not Exist

A generative AI tool is not a legal database. It is a predictive language engine trained to produce text that is statistically probable given the prompt it receives. When asked for a case citation, it assembles the components of what a citation looks like - party names, year, volume, reporter - in a format that appears authoritative.

In the Vijayawada matter, one of the fabricated citations was Subramani v. M. Natarajan (2013) 14 SCC 95. Structurally, it is indistinguishable from a real Supreme Court citation. Legally, it does not exist.
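That structural plausibility is the whole problem: any format check that accepts a genuine SCC citation will accept a fabricated one equally. A minimal sketch (the regex is illustrative, not an official citation grammar) makes the point concrete, using the fabricated citation from the Vijayawada matter alongside the well-known citation for Kesavananda Bharati:

```python
import re

# SCC-style citation format: "(year) volume SCC page".
# This validates presentation only; it says nothing about whether the case exists.
SCC_PATTERN = re.compile(r"\((?:19|20)\d{2}\)\s+\d+\s+SCC\s+\d+")

real = "Kesavananda Bharati v. State of Kerala (1973) 4 SCC 225"   # a genuine citation
fabricated = "Subramani v. M. Natarajan (2013) 14 SCC 95"          # the fabricated one

print(bool(SCC_PATTERN.search(real)))        # True
print(bool(SCC_PATTERN.search(fabricated)))  # True: format alone cannot detect fabrication
```

Both strings pass, which is exactly why a citation can only be verified against a primary source, never by how it looks on the page.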

This is not a malfunction. It is how these systems are designed to work. OpenAI's own System Card for its o3 and o4-mini models, published in April 2025, reported hallucination rates of 33% and 48% respectively on factual Q&A benchmarks. Academic research published in the Journal of Legal Analysis in 2024 found that general-purpose large language models hallucinated between 69% and 88% of the time when asked specific, verifiable questions about real court cases.

The legal profession cannot afford those odds.

What the Institutions Are Signalling

The institutional response in India is now structured, and it is escalating.

In November 2025, the Supreme Court's Centre for Research and Planning released a White Paper on Artificial Intelligence and the Judiciary, identifying citation fabrication and hallucination as primary risks. It directed that all AI-generated outputs - precedent lists, summaries, briefs - be independently verified before reliance. In July 2025, the Kerala High Court became the first High Court in the country to issue a formal AI policy for its district judiciary, explicitly stating that violations, including reliance on unverified AI outputs, may result in disciplinary action.

On April 4, 2026, the Gujarat High Court issued its own AI policy barring judges and court staff from using AI for any judicial decision-making, reasoning, or order drafting. The policy states without ambiguity: the use of AI "does not constitute a defence to a finding of error, misconduct, or professional negligence."

The Bar Council of India has issued no formal advisory on the subject. As the Supreme Court's notices to the Attorney General, Solicitor General, and the Bar Council of India in Gummadi Usha Rani make clear, that regulatory vacuum is itself now a matter before the highest court in the land.

What Every Lawyer Must Do

The courts have settled the first question. The duty to verify is not delegable. What remains is whether that duty is discharged in practice.

The obligation to verify is personal. No citation generated by a general-purpose AI tool should enter a pleading, submission, or written argument without cross-verification against SCC Online, Manupatra, or the primary court records. The Advocates Act, 1961 and the Bar Council of India Rules on professional conduct impose a duty of accuracy on the advocate who signs the document - not on the tool that assisted in drafting it.

The source of the tool's training matters. A general-purpose large language model is trained on internet data. Indian law - its primary judgments, High Court orders, statutory notifications, and tribunal rulings - is not reliably or comprehensively represented in that data. Lawyers and in-house teams must ask, before using any AI tool for legal research: what is this system trained on, and can its outputs be traced to a verifiable primary source?

Verification must become a workflow, not a habit. For law firms and enterprise legal teams, individual caution is not a governance standard. Firms need a documented protocol: which tools are permitted for which tasks, what the sign-off requirement is before an AI-assisted work product is filed, and where accountability sits when it is not.
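One way such a protocol can be made mechanical rather than habitual is sketched below. This is a hypothetical illustration, not any firm's actual system: `extract_citations`, `CitationCheck`, and `ready_to_file` are invented names, and the actual verification step (checking SCC Online, Manupatra, or the primary record) remains a human task that the workflow merely records:

```python
import re
from dataclasses import dataclass

# SCC-style citation format: "(year) volume SCC page" (illustrative pattern only).
SCC_PATTERN = re.compile(r"\((?:19|20)\d{2}\)\s+\d+\s+SCC\s+\d+")

@dataclass
class CitationCheck:
    citation: str
    verified: bool = False   # set True only after checking SCC Online/Manupatra/primary record
    verifier: str = ""       # name of the advocate who signed off

def extract_citations(draft: str) -> list[CitationCheck]:
    """Pull SCC-style citations out of a draft so each one gets a verification record."""
    return [CitationCheck(m.group(0)) for m in SCC_PATTERN.finditer(draft)]

def ready_to_file(checks: list[CitationCheck]) -> bool:
    """A filing clears only when every citation carries a named human verifier."""
    return all(c.verified and c.verifier for c in checks)

draft = "Reliance is placed on Subramani v. M. Natarajan (2013) 14 SCC 95."
checks = extract_citations(draft)
print(ready_to_file(checks))  # False until an advocate verifies and signs off each citation
```

The design choice worth noting is that the gate defaults to "not ready": an AI-assisted draft cannot clear until a named person has taken responsibility for each authority, which is where the Advocates Act places it anyway.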

The absence of a BCI advisory does not suspend the professional standard. The Hon'ble Supreme Court has established it. Every AI-assisted filing is the responsibility of the advocate whose name it carries.

AI will not displace the lawyer's duty to the court. What it has done is expose the consequence of treating that duty as something that can be outsourced.

The question before the profession is not whether lawyers will use AI. They already do. The question is whether the lawyer remains the final authority on what it produces - or whether that position, quietly and without intent, has been ceded to a machine that cannot distinguish a real judgment from a plausible-sounding fabrication.

In law, the answer to that question has never changed. The mind that signs the pleading is the mind that owns it.


