$ cat ai-legal-hallucinations.mdx

Lawyers made up court cases using AI

Apr 30, 2026 · #ai #chatgpt #law #hallucinations #sullivan-cromwell #risk


One of the largest US law firms - Sullivan & Cromwell - submitted a court document with fabricated precedents. They didn’t make them up themselves. AI did it for them. The judge checked. Scandal.

[Image: AI hallucination in a legal document - Sullivan & Cromwell]


If anyone still harbors the illusion that AI “is good enough now”, this story should end it.

Sullivan & Cromwell is one of the oldest, most expensive, and most prestigious law firms in New York. Its lawyers come from Harvard, Yale, Columbia. Billing rate: $1,500 per hour. Fifteen hundred. Per hour. For a single lawyer.

This firm submitted a court document with precedents that don’t exist. Invented by AI. In 2026. After years of industry-wide discussion about “AI hallucinations”.


What actually happened

A young lawyer - a few years out of law school, working in a junior position - was given the task of preparing a court document. He needed to find legal precedents matching a specific case.

Instead of spending ten hours in Westlaw or LexisNexis (professional legal databases), he used ChatGPT. He asked AI to find precedents. AI “found” them instantly - with full citations, case names, docket numbers, ruling excerpts.

Everything looked credible. The problem: none of those precedents existed. AI simply generated text that sounded like real precedents.

The document went to court. The judge - as a matter of routine - verified the cited precedents. It turned out not one of them existed. The judge sanctioned the firm. The story went public.


Why AI “makes things up”

Language models like ChatGPT, Claude, and Gemini aren’t fact databases. They’re text probability models: they learn which words tend to follow which, and generate text that “sounds like text on a given topic”.

If you train an AI on millions of legal documents, it learns the form - what a precedent citation looks like, what the docket format is, what the typical phrasings are. But it doesn’t learn which specific precedents exist. Ask it “give me a precedent for case X” and it will generate something that looks like a precedent. Not something that is a precedent.
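
To make that concrete, here is a toy sketch (mine, not anyone’s production system): a tiny next-word model trained on a handful of made-up, citation-shaped strings. Every case name in it is invented for illustration. Sampling from it produces output with the right shape - and no connection to any real case.

```python
# A toy bigram model: it learns which token follows which, nothing else.
# All "cases" in this corpus are invented for illustration.
import random
from collections import defaultdict

corpus = [
    "Smith v. Jones, 512 F.3d 101 (2d Cir. 2008)",
    "Doe v. Acme Corp., 731 F.2d 450 (9th Cir. 1984)",
    "Brown v. Board of Trade, 198 F.3d 77 (7th Cir. 1999)",
    "Miller v. United Steel, 640 F.2d 212 (5th Cir. 1981)",
]

# Count which token tends to follow which.
follows = defaultdict(list)
for line in corpus:
    tokens = line.split()
    for a, b in zip(tokens, tokens[1:]):
        follows[a].append(b)

def generate(start="Smith", max_tokens=12, seed=None):
    """Walk the table, picking a random plausible next token each step."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_tokens and out[-1] in follows:
        out.append(rng.choice(follows[out[-1]]))
    return " ".join(out)

# The output has the right shape (party v. party, reporter, court, year) -
# but it usually names a "case" that appears nowhere in the corpus.
print(generate(seed=7))
```

A real LLM is vastly more sophisticated, but the failure mode is the same: it models what a citation looks like, not which citations exist.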

This is known as “AI hallucination” - a problem documented since 2022. Every AI vendor warns about it. Every professional user gets trained on it. And yet one of the largest NY firms got burned.


This isn’t an isolated case

Since 2023 there have been dozens of similar cases - lawyers from smaller firms submitting documents with AI hallucinations. Every time the same script: the judge checked, the story blew up, the firm paid a fine.

But the difference is that Sullivan & Cromwell is a top-5 firm in the US. These people have procedures. Regulations. QA. Internal checks. And still AI hallucinated, and the procedures didn’t catch it.

So the problem is no longer “a junior lawyer didn’t know better”. The problem is that the entire industry can’t safely integrate AI.


My take

In my opinion, this is just the beginning.

In a year: the first doctor will diagnose based on AI. Invented medications. Real consequences - likely patient death or serious complications.

In two years: the first structural engineer will calculate load capacity based on AI. Invented formulas. A building collapses.

In three years: the first politician will cite a “study” invented by AI in a campaign. Wins the election. Sets policy based on hallucinations.

AI hallucinates, and people trust it. Because it sounds intelligent. Because it’s fast. Because it’s cheap.

My advice: use AI as an assistant, not an authority. Verify every claim. If AI tells you X, check if X exists. Always. Even if it sounds intelligent.
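
What that can look like in practice - a minimal sketch, assuming CourtListener’s public search API at /api/rest/v4/search/ (that endpoint and its response shape are my assumption here; check the current API docs before relying on it). The citation being checked is the made-up one from the toy example above, and citation_exists is a helper name I’m introducing, not an established function.

```python
# A hedged sketch of "verify every claim": before trusting an AI-supplied
# citation, look it up in a real database instead of taking it on faith.
import requests

def citation_exists(citation: str) -> bool:
    """Return True only if the search finds at least one matching opinion."""
    resp = requests.get(
        "https://www.courtlistener.com/api/rest/v4/search/",
        params={"q": citation, "type": "o"},  # "o" = case-law opinions
        timeout=10,
    )
    resp.raise_for_status()
    # Assumption: a non-empty "results" list means a real match was found.
    return bool(resp.json().get("results"))

# "Smith v. Jones, 512 F.3d 101" is the invented citation from the toy
# example above - exactly the kind of thing a model can hallucinate.
cite = "Smith v. Jones, 512 F.3d 101"
print(cite, "->", "found" if citation_exists(cite) else "NOT FOUND - do not cite")
```

The tool and the endpoint are interchangeable; the habit is the point. Nothing goes into a document on the strength of sounding right.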


$ cd ../