America’s largest personal injury law firm became the latest legal powerhouse caught filing court documents riddled with fake case citations manufactured by artificial intelligence, exposing a troubling pattern that threatens the integrity of the justice system itself.
Story Snapshot
- Morgan & Morgan attorneys fined $5,000 in 2025 Wyoming case for citing eight non-existent AI-generated legal precedents
- Thomson Reuters study identified 22 cases containing fabricated citations in just five weeks during summer 2025
- Hundreds of AI hallucination incidents now tracked across U.S. courts since landmark 2023 ChatGPT lawyer scandal
- General-purpose chatbots hallucinate fake cases on 58% to 82% of legal queries, undermining precedent-based law
Major Law Firm Sanctioned for AI-Generated Fake Cases
Morgan & Morgan, the nation’s largest personal injury law firm, faced sanctions totaling $5,000 in February 2025 after attorneys filed briefs in Wyoming federal court citing eight completely fabricated legal cases generated by artificial intelligence. Judge Kelly H. Rankin imposed a $3,000 fine on one attorney and removed him from the case entirely. The incident demonstrates that even well-resourced legal giants fall victim to AI hallucinations when attorneys prioritize efficiency over verification, raising serious questions about whether the pursuit of billable speed has compromised fundamental professional duties.
Epidemic of Fabricated Citations Spreads Through Courts
Thomson Reuters Westlaw conducted a study between June 30 and August 1, 2025, identifying 22 separate cases containing citations to non-existent legal precedents, many of which resulted in court-imposed sanctions. Yale researcher Matthew Dahl has tracked hundreds of instances across the United States since the phenomenon emerged publicly in 2023. The scale of the problem extends far beyond isolated incidents: academic studies reveal that general-purpose chatbots hallucinate on 58% to 82% of legal queries, while even specialized legal AI models fabricate citations on one in six queries or more, according to Stanford research.
Original ChatGPT Lawyer Case Set Dangerous Precedent
The crisis began in 2023 when New York attorney Steven A. Schwartz filed a federal brief in Mata v. Avianca citing six nonexistent cases with realistic-sounding names like “Varghese v. China Southern Airlines.” Judge P. Kevin Castel uncovered the fabrications after opposing counsel could not locate the cited cases, ultimately fining Schwartz and his firm $5,000. Chief Justice John Roberts addressed the incident in his 2023 Year-End Report on the Federal Judiciary, warning the legal profession about AI’s unreliability in high-stakes work. The case exposed how large language models, trained on vast datasets without true comprehension, generate plausible-looking fabrications that can mislead even experienced attorneys who fail to verify outputs.
System Failure Erodes Public Trust in Justice
The proliferation of fake citations threatens the foundation of American jurisprudence, which relies on precedent-based law requiring accurate case references. University of Miami professor Christina Frohock called the risk “scary,” noting that fabricated precedents could sway legal disputes and even propagate into court orders if judges unknowingly rely on AI-generated fakes. Courts face the added burden of verifying every brief, while clients risk losing cases to attorney incompetence. One Colorado attorney received a 90-day suspension after denying that he had used AI once the fabricated citations were discovered. Courts consistently emphasize that verification duties remain unchanged regardless of research methods: attorneys cannot blame technology for failing to uphold professional standards established long before AI existed.
[Eugene Volokh] AI Hallucinations in Filing by a Top Law Firm https://t.co/ESl5AzHl62
— Volokh Conspiracy (@VolokhC) April 21, 2026
Despite growing sanctions ranging from $1,000 to $5,000 and professional discipline up to license suspension, no federal rule bans AI-assisted filings as of late 2025. Legal experts note that firms with established AI verification policies and training programs have successfully avoided sanctions, suggesting the solution lies in human oversight rather than abandoning the technology. The pattern nonetheless reveals a deeper systemic problem: attorneys at prestigious firms prioritizing cost savings and billable efficiency over their fundamental duty to verify the facts they present to courts. It is yet another example of professionals cutting corners while ordinary Americans bear the consequences when the justice system fails to deliver outcomes based on actual law rather than computer-generated fiction.
Sources:
Thomson Reuters – GenAI Hallucinations in Legal Research
Stanford HAI – AI on Trial: Legal Models Hallucinate 1 Out of 6 or More Benchmarking Queries
Klemchuk – AI Hallucinations in Court Filings
Cronkite News – Lawyers, AI Hallucinations, and ChatGPT
NCSC – Legal Practitioners Guide to AI Hallucinations
Damien Charlotin – AI Hallucinations Database
Sterne Kessler – AI Hallucinations in Court Filings and Orders: A 2025 Review