As AI-generated fake content mars legal cases, states want guardrails

CoCounsel Legal is an artificial intelligence tool that acts as a virtual assistant for legal professionals. More people in the legal field are using AI to automate repetitive tasks and save time, but hallucinations have led to fake cases and false information in legal documents. (Photo by Madyson Fitzgerald/Stateline)
Last spring, Illinois county judge Jeffrey Goffinet noticed something startling: A legal brief filed in his courtroom cited a case that did not exist.
Goffinet, an associate judge in Williamson County, looked through two legal research systems and then headed to the courthouse library — a place he hadn’t visited in years — to consult the book that purportedly listed the case. The case wasn’t in it.
The fake case, generated by artificial intelligence, came across Goffinet’s desk just a few months after the Illinois Supreme Court’s policy on the use of AI in the courts took effect. Goffinet co-chaired a task force that informed that policy, which allows the use of AI as long as it complies with existing legal and ethical standards.
“People are going to use [AI], and the courts are not going to be able to be a dam across a river that’s already flowing at flood capacity,” Goffinet said. “We have to learn how to coexist with it.”
As more false quotes, fake court cases and incorrect information appear in legal documents generated by AI, state bar associations, state court systems and national law organizations are issuing guidance on its use in the legal field. A handful of states are considering or enacting legislation to address the issue, and many courts and professional associations are focused on education for attorneys.
From divorce cases to discrimination lawsuits, AI-generated fake content can lead courts to throw out evidence and deny motions.
While some states urge attorneys to lean on existing guidance about accuracy and transparency, the new policies address AI concerns related to confidentiality, competency and costs. Most policies and opinions encourage attorneys to educate themselves and to use proprietary AI tools that keep sensitive data from being entered into public systems. And because AI tools can increase efficiency, several policies advise attorneys to charge less if they spend less time on cases.
Some states, such as Ohio, also ban the use of artificial intelligence for certain legal tasks. In Ohio, courts are prohibited from using AI to translate legal forms, court orders and similar content that may affect the outcome of a case.
Several states have also advised legal professionals to adhere to the American Bar Association’s formal opinion on the ethical use of AI in law practice.
Artificial intelligence can help attorneys and law firms by automating administrative tasks, analyzing contracts and organizing documents. Generative AI can also be used to draft legal documents, including court briefs. Experts say the use of AI productivity tools can save legal professionals time and reduce the risk of human error in everyday tasks.
But law professionals nationwide have faced fines and license suspensions, among other consequences, for submitting legal documents citing false quotes, cases or information.
Many legal professionals are likely to miss instances in which an AI system is “hallucinating,” or confidently making statements that are not true, said Rabihah Butler, the manager for enterprise content for Risk, Fraud and Government at the Thomson Reuters Institute. The institute is a research subsidiary of the Thomson Reuters company, which sells an AI system meant to help lawyers.
Courts and law organizations will need to consider education, sanctions and punitive actions to ensure law professionals are using AI appropriately, Butler said.
“AI has such confidence, and it can appear so polished, that if you’re not paying attention and doing your due diligence, the hallucination is being treated as a factual piece of information,” she said.
Since the beginning of 2025, there have been 518 documented cases in which generative AI produced hallucinated content used in U.S. courts, according to a database maintained by Damien Charlotin, a senior research fellow at the HEC Paris business school.
“So far, if we’re looking at the institutional response, there’s not a lot because people are not very sure how to handle this kind of issue,” Charlotin said. “Everyone is aware that some lawyers are using artificial intelligence in their day-to-day work. Most people are aware that the technology is not very mature. But it’s still hard to prevent a mistake.”
State guidance
As of Jan. 23, state bar associations or similar entities have issued formal guidance on the use of AI in at least 10 states and the District of Columbia, typically in the form of an ethics opinion. Those opinions aren’t enforceable as law, but they spell out proper conduct.
In February, for example, the Professional Ethics Committee for the State Bar of Texas issued an ethics opinion that outlines issues that may arise from law professionals using AI. Texas lawyers should have a basic understanding of generative AI tools and guardrails to protect client confidentiality, it said. They should also verify any content generated by AI and refrain from charging clients for the time saved by using AI tools.
Legal professionals must be aware of their own competency with AI tools, said Brad Johnson, the executive director of the Texas Center for Legal Ethics.
“A really important takeaway from the opinion is that if a lawyer is considering using a generative AI tool in the practice of law, the lawyer has to have a reasonable and current understanding of the technology because only then can a lawyer really evaluate the risks that are associated with it,” he said.
Court systems in at least 11 states — Arizona, Arkansas, California, Connecticut, Delaware, Illinois, New York, Ohio, South Carolina, Vermont and Virginia — have established policies or issued rules of conduct regarding AI use by law professionals.
Illinois, for instance, allows lawyers to use artificial intelligence and does not require disclosure. The policy also emphasizes that judges will ultimately be responsible for their decisions, regardless of “technological advancements.”
“The task force wanted to emphasize that as judges, what we bring to the table is our humanity,” said Goffinet, the associate judge. “And we cannot abdicate our humanity in favor of an AI-generated decision or opinion.”
Some state lawmakers have tried to address the issue through legislation. Last year, Louisiana Republican Gov. Jeff Landry signed a measure that requires attorneys to use “reasonable diligence” to verify the authenticity of evidence, including content generated by artificial intelligence. The law also allows parties in civil cases to raise concerns about the admissibility of evidence if they suspect it was generated or altered by artificial intelligence.
California Democratic state Sen. Tom Umberg also introduced legislation last year that would require attorneys to ensure confidential information is not entered into a public generative AI system. The measure, which was approved by the Senate Judiciary Committee last week, also would require attorneys to ensure that reasonable steps are taken to verify the accuracy of generative AI material.
Attorney education
It’s also important for state bar associations and law schools to provide education on artificial intelligence, said Michael Hensley, a counsel at FBT Gibbons and an advocate for the safe use of AI in California courts. AI can cut research time much as online legal research systems did, but it requires training, he said.
“I would hope the state bar would have training for this,” Hensley said. “And I think it’s absolutely imperative that law schools have a session on AI.”
In a Bloomberg Law survey conducted last spring, 51% of the more than 750 respondents said their law firms purchased or invested in generative artificial intelligence tools. Another 21% said they planned to purchase AI tools within the next year. Attorneys reported using generative AI for general legal research, drafting communications, summarizing legal narratives, reviewing legal documents and other work.
At law firms that were not using generative AI, attorneys cited incorrect or unreliable output, ethical issues, security risks and data privacy as the top reasons.
While attorneys and law firms have become more comfortable with AI tools, courts have been more apprehensive, said Diane Robinson, a principal court research associate at the National Center for State Courts. Robinson is also project director at the Thomson Reuters Institute/NCSC AI Policy Consortium for Law and Courts, an association of legal practitioners and researchers developing guidance and resources for the use of AI in courts.
AI has the potential to improve case processing and can allow people needing legal advice to find information by using AI chatbots, she said. But, she added, courts are still struggling with evidence altered by AI and briefs littered with hallucinations.
“Fake evidence is nothing new,” Robinson said. “People have been altering photographs as long as there were photographs. But with AI, the ability to create videos, audio and pictures has become very easy, and courts are really struggling with it.”
Charlotin, of HEC Paris, said most courts and professional associations will continue to focus on education for now.
“You cannot prevent a mistake just by telling people, ‘Don’t make a mistake,’” Charlotin said. “That doesn’t work. It’s more about setting up processes to make people aware of it, then they can set up processes to work on dealing with it.”
Stateline reporter Madyson Fitzgerald can be reached at mfitzgerald@stateline.org.
This story was originally produced by Stateline, which is part of States Newsroom, a nonprofit news network that includes the Wisconsin Examiner and is supported by grants and a coalition of donors as a 501(c)(3) public charity.