Why Judges Are Now Asking ‘Did You Use ChatGPT?’ Under Oath

The Courtroom Is Changing Fast

Not long ago, the idea of a judge asking a lawyer whether they used an artificial intelligence tool to write their legal documents would have sounded like something out of a science fiction story. Today, it is becoming a routine part of how courts operate across the United States and in several other countries. Judges are now requiring lawyers, and sometimes even parties representing themselves, to disclose whether AI tools like ChatGPT were used in preparing court filings.

This shift is happening quickly, and for good reason. The legal system depends on accuracy, honesty, and trust. When AI tools enter the picture, all three of those values can be put at serious risk if the right guardrails are not in place.

What Triggered This Change in Courtrooms

The push for AI disclosure rules did not come out of nowhere. It came from a series of high-profile incidents where lawyers submitted court documents that contained completely made-up case citations generated by AI tools. These fake citations — sometimes called “hallucinations” in AI terminology — referenced cases that simply did not exist.

One of the most widely reported examples was the 2023 case Mata v. Avianca in the Southern District of New York, where a lawyer submitted a legal brief citing multiple court decisions. When the opposing side went to look up those cases, they could not find them anywhere. The reason was simple: ChatGPT had invented them. The lawyer had used the AI tool to help research and write the brief but had not fact-checked the output before submitting it to the court.

That case ended with a $5,000 sanction against the attorneys and their firm. It also sent a clear message to courts everywhere: AI use in legal documents is a real issue that needs real oversight.

How Judges Are Responding

Judges across different jurisdictions have started putting formal AI disclosure requirements in place. Some have added these rules to their standard court orders. Others have issued standing orders that apply to all cases in their courtroom. The rules vary from judge to judge and court to court, but the core idea is the same: if you used AI to help prepare your documents, you need to say so.

Here are some common ways these disclosure requirements are being handled:

  • Written certification: Attorneys must sign a statement confirming whether AI was used and, if so, how it was used and how the output was verified.
  • In-court questioning: Some judges are asking attorneys directly during hearings whether AI tools played a role in drafting filings.
  • Sworn declarations: In more serious cases, parties may be required to submit sworn statements about their use of AI under penalty of perjury.
  • Ongoing review requirements: Some courts require that any AI-generated content be reviewed and verified by a licensed attorney before submission.

Why Disclosure Matters So Much in Legal Settings

The legal system runs on trust. Judges rely on the information presented to them being accurate and truthful. Lawyers have ethical obligations to verify the facts and law they present in court. When AI tools are used carelessly, those obligations can be violated without the attorney even realizing it.

AI language models like ChatGPT are designed to produce text that sounds confident and authoritative. They do not always flag when they are uncertain or when they are generating information that may not be accurate. In everyday use, this might not matter much. In a courtroom, it can have serious consequences for clients, for opposing parties, and for the integrity of the entire legal process.

Disclosure requirements exist to make sure that someone with actual legal knowledge and responsibility is standing behind every document submitted to the court. They are not meant to ban AI from legal practice. They are meant to ensure that AI is used responsibly.

The Difference Between Helpful and Harmful AI Use

It is important to understand that courts are not simply trying to block lawyers from using modern technology. Many judges and legal experts acknowledge that AI tools can genuinely help with legal work. They can assist with organizing arguments, summarizing long documents, drafting initial versions of contracts, and identifying relevant legal issues.

The problem is not AI itself. The problem is AI use that goes unchecked. There is a big difference between a lawyer who uses AI to draft a document and then carefully reviews every fact, every citation, and every legal claim before submitting it to the court — and a lawyer who copies AI output directly into a filing without any review at all.

Courts are trying to draw that line clearly. Disclosure requirements help make sure the right kind of human oversight is always in place.

What Happens When AI Use Is Hidden

Failing to disclose the use of AI when a court requires it can lead to serious professional and legal consequences. Depending on the jurisdiction and the circumstances, attorneys who hide AI use in their filings may face:

  • Monetary sanctions imposed by the court
  • Dismissal of their client’s case
  • Referrals to state bar disciplinary committees
  • Suspension or loss of their law license
  • Charges of fraud or contempt of court in extreme cases

For self-represented individuals, the consequences can also be significant. Courts expect honesty from everyone who appears before them, regardless of whether they have a lawyer. Using AI to prepare documents is not necessarily a problem, but misrepresenting how documents were prepared — or submitting inaccurate information — can undermine a person’s entire case.

How Different Courts Are Setting Their Own Rules

There is currently no single national standard in the United States for how courts must handle AI disclosure. This means the rules can look very different depending on where a case is being heard.

Some federal district courts have been among the first to implement formal written policies. Several federal judges have issued standing orders that explicitly address AI use; one of the earliest, issued in 2023 by Judge Brantley Starr of the Northern District of Texas, requires attorneys to certify either that no generative AI was used in drafting a filing or that any AI-drafted language was checked for accuracy by a human. At the state level, courts in places like Texas, California, and Florida have seen early adoption of disclosure requirements, though the specific language and procedures vary.

International courts and legal systems are also grappling with the same questions. Courts in the United Kingdom, Canada, and Australia have begun issuing guidance or rules about AI use in submitted documents, reflecting the fact that this is a global challenge, not just an American one.

The Role of Bar Associations and Legal Ethics Boards

Beyond individual judges and courts, bar associations are also working to give attorneys clearer guidance on how to use AI ethically. Several state bar associations have issued formal ethics opinions addressing AI in legal practice. These opinions generally emphasize a few key principles:

  • Lawyers have a duty of competence, which now includes understanding the capabilities and limitations of AI tools they choose to use.
  • Lawyers have a duty of supervision, meaning they cannot simply delegate legal work to an AI system and walk away.
  • Lawyers must ensure the confidentiality of client information, which can be at risk when using AI platforms that store or process data.
  • Honesty obligations require attorneys to verify the accuracy of everything they submit to a court, regardless of how it was created.

These guidelines are helping to shape a new standard of professional responsibility in an era where AI tools are becoming a normal part of everyday work.

What This Means for Everyday People Using Courts

For regular people who find themselves involved in legal proceedings — whether as defendants, plaintiffs, or self-represented litigants — this trend has real practical meaning. If you are preparing your own court documents and you use an AI tool to help, some jurisdictions may require you to say so.

More importantly, you should know that AI tools can make mistakes. They can generate incorrect information that sounds completely believable. Before you submit anything to a court, it is essential to verify every fact, every law, and every date mentioned in your documents. Submitting inaccurate information to a court — even unintentionally — can seriously damage your position.

If you are unsure about any of this, consulting with a licensed attorney is always the safest step. Many lawyers now offer limited-scope consultations specifically to help people review documents before they are filed.

Where This Is All Heading

The legal system is adapting, but it is doing so carefully and deliberately. Courts are not looking to punish people for using available tools. They are looking to protect the integrity of legal proceedings at a time when technology is outpacing regulation.

Over the coming years, it is likely that more formal and consistent rules about AI use in litigation will develop. National judicial councils, legislative bodies, and bar associations are all working on frameworks that could eventually create clearer standards across different courts and jurisdictions.

For now, the clearest message from judges is this: if you used AI to help prepare your legal documents, be honest about it. Make sure a qualified human being has reviewed everything carefully. And never assume that something is accurate just because an AI tool said it with confidence.

The Bottom Line

The question “Did you use ChatGPT?” is now part of the legal conversation in courtrooms across the country. It reflects a real and serious challenge: how do you keep legal proceedings fair, accurate, and trustworthy when powerful AI tools are so easy to access and so easy to misuse?

The answer courts are landing on is not to ban the technology. It is to demand transparency about how it is used and to hold attorneys and litigants responsible for making sure that everything submitted to the court is accurate, honest, and properly verified. In a system built on truth and accountability, that seems like exactly the right approach.
