AI Hallucinated a Case. The Lawyer Got Fined $5,000. You’re Next If You Do This.
A $5,000 Lesson the Legal World Will Never Forget
In 2023, a federal judge in New York fined a lawyer and his firm $5,000 for submitting court documents filled with fake case citations in Mata v. Avianca. The cases didn’t exist. The quotes were fabricated. The legal precedents were completely made up. And the source of all this fiction? ChatGPT.
The attorney, Steven Schwartz of the firm Levidow, Levidow & Oberman, used the AI chatbot to help with legal research for a personal injury case. When the opposing counsel and the judge couldn’t find the cited cases, the truth came out fast. The lawyer had never verified a single citation. He simply trusted the AI and submitted the work.
The result was public humiliation, a hefty fine, and a cautionary tale that spread across every law school and courtroom in the country. If you work in law — or any profession that depends on accuracy — this story matters deeply to you.
What Is an AI Hallucination and Why Is It So Dangerous?
The term “hallucination” in AI refers to when a system like ChatGPT, Bard, or any large language model generates information that sounds completely real but is entirely made up. The AI doesn’t know it’s lying. It isn’t trying to deceive you. It simply predicts the next most likely word based on patterns in its training data — and sometimes those patterns lead it straight off a cliff.
Here’s what makes AI hallucinations especially dangerous in legal work:
- They sound authoritative. Fake cases come with realistic-sounding names, dates, jurisdictions, and citations. Nothing looks out of place on the surface.
- They are hard to catch without verification. Unless you actually look up every case in a real database, you may never know it doesn’t exist.
- They can survive internal review. If your entire team trusts the AI and no one double-checks, false information can make it all the way to a judge’s desk.
- The AI does not flag its own mistakes. It will state invented facts with the same confident tone it uses for accurate ones.
In law, a single false citation can destroy a case, damage your client’s trust, and put your license at serious risk.
The Exact Mistakes That Got the Lawyer Fined
Looking at what happened in the Schwartz case, a clear pattern of errors stands out. Understanding these mistakes is the first step to making sure you never repeat them.
1. Using AI as a Research Tool Without Verification
The lawyer asked ChatGPT to find supporting case law. The AI produced a list of cases that appeared legitimate. He used them directly in a legal brief without checking whether they were real. This is the core error. AI can assist with research, but it cannot replace verification. Every citation must be confirmed in an actual legal database like Westlaw or LexisNexis.
2. Trusting Output Because It Looked Professional
One of the most common traps with AI tools is that their output looks polished and professional. Full case names, volume numbers, page references — it all appears legitimate. The formatting doesn’t tell you whether the content is real. Always treat AI-generated citations as unverified leads, not confirmed sources.
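One way to operationalize “unverified leads” is to pull every citation-shaped string out of a draft and put it on a worklist before filing. The sketch below is illustrative only, not legal software: the regex handles single-token reporters like “F.3d” or “U.S.” and will miss many real Bluebook formats, so treat it as a starting point, not a filter you can rely on.

```python
import re

# Rough pattern for reporter citations like "925 F.3d 1339" or "550 U.S. 544".
# Deliberately simplified: multi-word reporters ("F. Supp. 2d") won't match.
CITATION_RE = re.compile(r"\b\d{1,4}\s+[A-Z][\w.]*\s+\d{1,5}\b")

def extract_citations(draft: str) -> list[str]:
    """Return every citation-shaped string found in the draft.

    Each hit is an UNVERIFIED lead: it still has to be confirmed in a
    real legal database before it appears in a filing.
    """
    return [m.group(0) for m in CITATION_RE.finditer(draft)]

draft = (
    "Plaintiff relies on Varghese v. China Southern Airlines, "
    "925 F.3d 1339, and Twombly, 550 U.S. 544."
)
for cite in extract_citations(draft):
    print(f"VERIFY BEFORE FILING: {cite}")
```

The point of the script is the workflow, not the pattern: nothing on the worklist is treated as real until a human confirms it in Westlaw, LexisNexis, or another primary source.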
3. Not Disclosing AI Use to the Court
Judges across the country are now requiring attorneys to disclose when AI tools were used in preparing legal documents. Failing to disclose this is itself becoming an ethical violation in many jurisdictions. The legal profession is evolving quickly around this issue, and ignorance of local rules is not an acceptable defense.
4. Ignoring Professional Responsibility Rules
Attorney ethics rules — including the ABA Model Rules of Professional Conduct — require lawyers to provide competent representation. Rule 1.1 specifically covers this. Using a tool you don’t fully understand, without taking steps to verify its output, can constitute incompetence under professional standards. The fine in the Schwartz case was only part of the consequence. The reputational damage was far greater.
How This Connects to Malpractice Risk
Beyond court sanctions and fines, there is the very real threat of legal malpractice. If a client loses a case because fabricated citations weakened their legal argument, they have grounds to sue their attorney for damages. Malpractice claims based on AI errors are a new and growing area of concern for the legal profession.
Consider the chain of consequences:
- Attorney uses AI to research and draft legal documents.
- AI produces fake or inaccurate case law.
- Attorney fails to verify and submits the work.
- The court identifies the false citations.
- The case is harmed or dismissed.
- The client loses and files a malpractice claim.
- The attorney faces sanctions, fines, and a lawsuit.
That chain breaks at step three: verify the citations, and nothing after that point ever happens. Verification is the firewall between AI assistance and professional disaster.
It’s Not Just Lawyers at Risk
While the legal world offers the most dramatic and well-publicized examples, the danger of AI hallucinations extends to anyone who presents information as fact in a professional setting. Doctors relying on AI for clinical references, journalists using AI to pull quotes, financial advisors using AI-generated data — all of these professionals face similar risks if they skip the verification step.
The lesson isn’t that AI is bad. The lesson is that AI is a tool, and every tool has limits. A hammer can build a house or break your thumb. The difference is how you use it.
What You Should Actually Do When Using AI for Professional Work
Here is a straightforward set of practices that can protect you from the same fate:
- Treat every AI output as a draft, not a final product. No AI-generated content should go anywhere professional without a human review layer.
- Verify all citations, facts, and statistics independently. Use primary sources. If the AI cites a case, look it up. If it references a study, find the original study.
- Understand the tool you’re using. Know what AI can and cannot do. Understand that it doesn’t “know” facts — it generates plausible text based on patterns.
- Stay current on disclosure rules in your jurisdiction. Rules around AI use in legal documents are changing rapidly. Check your local court rules and your state bar’s guidance regularly.
- Create internal review processes for AI-assisted work. If your firm or team uses AI tools, build in a mandatory verification step before anything gets submitted or published.
- Document your verification process. In the event of a dispute, being able to show that you took responsible steps to verify AI output can make a significant difference in how you are judged.
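The last two practices can be as simple as a structured log. Here is a minimal sketch of what such a record might look like; the field names (`citation`, `checked_in`, `verified_by`) are illustrative, not any standard, and a real firm would adapt them to its own workflow and retention rules.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CitationCheck:
    """One entry in a verification log for AI-assisted work."""
    citation: str          # citation exactly as it appears in the draft
    checked_in: str = ""   # database consulted, e.g. "Westlaw"
    verified_by: str = ""  # person who confirmed the source
    confirmed: bool = False
    checked_at: str = ""   # ISO timestamp of the check

    def confirm(self, database: str, verifier: str) -> None:
        """Record a completed human verification of this citation."""
        self.checked_in = database
        self.verified_by = verifier
        self.confirmed = True
        self.checked_at = datetime.now(timezone.utc).isoformat()

def unverified(log: list[CitationCheck]) -> list[str]:
    """Citations that must still be confirmed before anything is filed."""
    return [c.citation for c in log if not c.confirmed]

log = [CitationCheck("550 U.S. 544"), CitationCheck("925 F.3d 1339")]
log[0].confirm("Westlaw", "J. Doe")
print("Still unverified:", unverified(log))
```

A log like this serves both purposes at once: the `unverified` list is the mandatory review gate, and the completed entries are the documentation you can produce if your process is ever questioned.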
What the Courts Are Saying Now
Following the Schwartz case, judges around the country moved quickly. Multiple federal and state courts have issued standing orders requiring attorneys to certify that any AI-generated content has been reviewed for accuracy by a human. Some courts now require explicit disclosure of which AI tools were used and how the output was verified.
In one standing order, Judge Brantley Starr of the Northern District of Texas required attorneys to certify that they had not used AI tools — or if they had, that a human had reviewed the citations for accuracy. Violations of such orders carry the same weight as any other contempt or ethics violation.
This is only the beginning. As AI becomes more common in legal practice, regulation around its use will tighten. Getting ahead of that curve is not just smart — it’s necessary for survival in the profession.
The Bigger Picture: Ethics in the Age of AI
Attorney ethics have always been built on a foundation of trust. Clients trust their lawyers to act in their best interest. Courts trust attorneys to present accurate information. The entire legal system depends on this basic good faith.
AI doesn’t break that system on its own. But uncritical use of AI — where professionals hand off their judgment to a machine and stop thinking for themselves — absolutely can.
The $5,000 fine Steven Schwartz and his firm received was a warning shot. Courts and bar associations have made clear that they will not accept “the AI made a mistake” as an excuse. You are responsible for everything you put your name on. Full stop.
Technology will keep advancing. New AI tools will get smarter, faster, and more convincing. The risk of hallucinations may decrease over time, but it will never reach zero. Your professional judgment, your verification habits, and your ethical standards are the only reliable safeguards you have.
Final Thought: Don’t Let a Chatbot End Your Career
Using AI tools to work more efficiently is completely reasonable. Many lawyers, doctors, writers, and researchers use them every day without incident. The difference between those who use AI safely and those who end up in front of a disciplinary board comes down to one thing: they never stopped being the professional in the room.
The AI is a tool. You are the expert. Never get those two things confused.
Check every citation. Read every output critically. Disclose AI use when required. And remember that your license, your reputation, and your client’s case are worth far more than the time you save by skipping the verification step.
The lawyer who got fined $5,000 learned that the hard way. You don’t have to.