As AI-generated fake content mars legal cases, states want guardrails

27 January 2026 at 11:00

CoCounsel Legal is an artificial intelligence tool that acts as a virtual assistant for legal professionals. More people in the legal field are using AI to automate repetitive tasks and save time, but hallucinations have led to fake cases and false information in legal documents. (Photo by Madyson Fitzgerald/Stateline)

Last spring, Illinois county judge Jeffrey Goffinet noticed something startling: A legal brief filed in his courtroom cited a case that did not exist.

Goffinet, an associate judge in Williamson County, looked through two legal research systems and then headed to the courthouse library — a place he hadn’t visited in years — to consult the book that purportedly listed the case. The case wasn’t in it.

The fake case, generated by artificial intelligence, came across Goffinet’s desk just a few months after the Illinois Supreme Court’s policy on the use of AI in the courts took effect. Goffinet co-chaired a task force that informed that policy, which allows the use of AI as long as it complies with existing legal and ethical standards.

“People are going to use [AI], and the courts are not going to be able to be a dam across a river that’s already flowing at flood capacity,” Goffinet said. “We have to learn how to coexist with it.”

As more false quotes, fake court cases and incorrect information appear in legal documents generated by AI, state bar associations, state court systems and national law organizations are issuing guidance on its use in the legal field. A handful of states are considering or enacting legislation to address the issue, and many courts and professional associations are focused on education for attorneys.

From divorce cases to discrimination lawsuits, AI-generated fake content can cause evidence to be dismissed and motions to be denied.

While some states urge attorneys to lean on existing guidance about accuracy and transparency, the new policies address AI concerns related to confidentiality, competency and costs. Most policies and opinions encourage attorneys to educate themselves and to use proprietary AI tools that prevent sensitive data from being entered into open source systems. Since AI tools could also increase efficiency, several policies advise attorneys to charge less if they spend less time on cases.

Some states, such as Ohio, also ban the use of artificial intelligence for certain legal tasks. In Ohio, courts are prohibited from using AI to translate legal forms, court orders and similar content that may affect the outcome of a case.

Several states have also advised legal professionals to adhere to the American Bar Association’s formal opinion on the ethical use of AI in law practice.

Artificial intelligence can help attorneys and law firms by automating administrative tasks, analyzing contracts and organizing documents. Generative AI can also be used to draft legal documents, including court briefs. Experts say the use of AI productivity tools can save legal professionals time and reduce the risk of human error in everyday tasks.

But law professionals nationwide have faced fines and license suspensions, among other consequences, for submitting legal documents citing false quotes, cases or information.

Many legal professionals are unlikely to notice instances in which an AI system is “hallucinating,” or confidently making statements that are not true, said Rabihah Butler, the manager for enterprise content for Risk, Fraud and Government at the Thomson Reuters Institute. The institute is a research subsidiary of Thomson Reuters, which sells an AI system meant to help lawyers.

Courts and law organizations will need to consider education, sanctions and punitive actions to ensure law professionals are using AI appropriately, Butler said.

“AI has such confidence, and it can appear so polished, that if you’re not paying attention and doing your due diligence, the hallucination is being treated as a factual piece of information,” she said.
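
The “due diligence” Butler describes can be partially automated. Below is a minimal sketch in Python that flags possibly hallucinated citations by checking whether a cited case turns up in the Free Law Project’s public CourtListener search API. The endpoint path, query parameters and response shape are assumptions based on that service’s v4 REST API and may have changed, so check the current documentation; an empty result is only a prompt to pull the reporter and verify by hand, as Goffinet did, not proof that a case is fake.

```python
# Minimal sketch: flag possibly hallucinated case citations by checking
# whether they appear in CourtListener's public case-law search.
# ASSUMPTIONS: the v4 search endpoint, the "q"/"type" query parameters and
# the "results" field in the JSON response; verify against current API docs.
import requests

COURTLISTENER_SEARCH = "https://www.courtlistener.com/api/rest/v4/search/"

def citation_found(case_name: str) -> bool:
    """Return True if the search API reports at least one matching opinion."""
    resp = requests.get(
        COURTLISTENER_SEARCH,
        params={"q": f'"{case_name}"', "type": "o"},  # "o" = opinions
        timeout=10,
    )
    resp.raise_for_status()
    return len(resp.json().get("results", [])) > 0

if __name__ == "__main__":
    for cite in ["Brown v. Board of Education", "Smith v. Imaginary Corp."]:
        flag = "found" if citation_found(cite) else "NOT FOUND; verify manually"
        print(f"{cite}: {flag}")
```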

Since the beginning of 2025, there have been 518 documented cases in which hallucinated, AI-generated content appeared in U.S. court filings, according to a database maintained by Damien Charlotin, a senior research fellow at the HEC Paris business school.

“So far, if we’re looking at the institutional response, there’s not a lot because people are not very sure how to handle this kind of issue,” Charlotin said. “Everyone is aware that some lawyers are using artificial intelligence in their day-to-day work. Most people are aware that the technology is not very mature. But it’s still hard to prevent a mistake.”

State guidance

As of Jan. 23, state bar associations or similar entities have issued formal guidance on the use of AI in at least 10 states and the District of Columbia, typically in the form of an ethics opinion. Those opinions aren’t enforceable as law, but they spell out proper conduct.

In February, for example, the Professional Ethics Committee for the State Bar of Texas issued an ethics opinion that outlines issues that may arise from law professionals using AI. Texas lawyers should have a basic understanding of generative AI tools and should maintain guardrails to protect client confidentiality, it said. They should also verify any content generated by AI and refrain from charging clients for time saved by using AI tools.

Legal professionals must be aware of their own competency with AI tools, said Brad Johnson, the executive director of the Texas Center for Legal Ethics.

“A really important takeaway from the opinion is that if a lawyer is considering using a generative AI tool in the practice of law, the lawyer has to have a reasonable and current understanding of the technology because only then can a lawyer really evaluate the risks that are associated with it,” he said.

Court systems in at least 11 states — Arizona, Arkansas, California, Connecticut, Delaware, Illinois, New York, Ohio, South Carolina, Vermont and Virginia — have established policies or issued rules of conduct regarding AI use by law professionals.

Illinois, for instance, allows lawyers to use artificial intelligence and does not require disclosure. The policy also emphasizes that judges will ultimately be responsible for their decisions, regardless of “technological advancements.”

“The task force wanted to emphasize that as judges, what we bring to the table is our humanity,” said Goffinet, the associate judge. “And we cannot abdicate our humanity in favor of an AI-generated decision or opinion.”

Some state lawmakers have tried to address the issue through legislation. Last year, Louisiana Republican Gov. Jeff Landry signed a measure that requires attorneys to use “reasonable diligence” to verify the authenticity of evidence, including content generated by artificial intelligence. The law also allows parties in civil cases to raise concerns about the admissibility of evidence if they suspect it was generated or altered by artificial intelligence.

California Democratic state Sen. Tom Umberg also introduced legislation last year that would require attorneys to ensure confidential information is not entered into a public generative AI system. The measure, which was approved by the Senate Judiciary Committee last week, also would require attorneys to ensure that reasonable steps are taken to verify the accuracy of generative AI material.

Attorney education

It’s also important for state bar associations and law schools to provide education on artificial intelligence, said Michael Hensley, a counsel at FBT Gibbons and an advocate for the safe use of AI in California courts. AI can cut research time much as online legal research systems did, but it requires training, he said.

“I would hope the state bar would have training for this,” Hensley said. “And I think it’s absolutely imperative that law schools have a session on AI.”

In a Bloomberg Law survey conducted last spring, 51% of the more than 750 respondents said their law firms purchased or invested in generative artificial intelligence tools. Another 21% said they planned to purchase AI tools within the next year. Attorneys reported using generative AI for general legal research, drafting communications, summarizing legal narratives, reviewing legal documents and other work.

Attorneys at law firms that were not using generative AI cited incorrect or unreliable output, ethical issues, security risks and data privacy as the top reasons for avoiding it.

While attorneys and law firms have become more comfortable with AI tools, courts have been more apprehensive, said Diane Robinson, a principal court research associate at the National Center for State Courts. Robinson is also project director at the Thomson Reuters Institute/NCSC AI Policy Consortium for Law and Courts, an association of legal practitioners and researchers developing guidance and resources for the use of AI in courts.

AI has the potential to improve case processing and can allow people needing legal advice to find information by using AI chatbots, she said. But, she added, courts are still struggling with evidence altered by AI and briefs littered with hallucinations.

“Fake evidence is nothing new,” Robinson said. “People have been altering photographs as long as there were photographs. But with AI, the ability to create videos, audio and pictures has become very easy, and courts are really struggling with it.”

Charlotin, of HEC Paris, said most courts and professional associations will continue to focus on education right now.

“You cannot prevent a mistake just by telling people, ‘Don’t make a mistake,’” Charlotin said. “That doesn’t work. It’s more about setting up processes to make people aware of it, then they can set up processes to work on dealing with it.”

Stateline reporter Madyson Fitzgerald can be reached at mfitzgerald@stateline.org.

This story was originally produced by Stateline, which is part of States Newsroom, a nonprofit news network that includes the Wisconsin Examiner and is supported by grants and a coalition of donors as a 501(c)(3) public charity.

States will keep pushing AI laws despite Trump’s efforts to stop them

13 December 2025 at 14:28

A billboard advertises an artificial intelligence company in San Francisco in September. California is among the states leading the way on AI regulations, but an executive order signed by President Donald Trump seeks to override state laws on the technology. (Photo by Justin Sullivan/Getty Images)

State lawmakers of both parties said they plan to keep passing laws regulating artificial intelligence despite President Donald Trump’s efforts to stop them.

Trump signed an executive order Thursday evening that aims to override state artificial intelligence laws. He said his administration must work with Congress to develop a national AI policy, but that in the meantime, it will crack down on state laws.

The order comes after several other Trump administration efforts to rein in state AI laws and loosen restrictions for developers and technology companies.

But despite those moves, state lawmakers are continuing to prefile legislation related to artificial intelligence in preparation for their 2026 legislative sessions. Opponents are also skeptical about — and likely to sue over — Trump’s proposed national framework and his ability to restrict states from passing legislation.

“I agree on not overregulating, but I don’t believe the federal government has the right to take away my right to protect my constituents if there’s an issue with AI,” said South Carolina Republican state Rep. Brandon Guffey, who penned a letter to Congress opposing legislation that would curtail state AI laws.

The letter, signed by 280 state lawmakers from across the country, shows that state legislators from both parties want to retain their ability to craft their own AI legislation, said South Dakota Democratic state Sen. Liz Larson, who co-wrote the letter.

Earlier this year, South Dakota Republican Gov. Larry Rhoden signed the state’s first artificial intelligence law, authored by Larson, prohibiting the use of a deepfake — a digitally altered photo or video that can make someone appear to be doing just about anything — to influence an election.

South Dakota and other states with more comprehensive AI laws, such as California and Colorado, would see their efforts overruled by Trump’s order, Larson said.

“To take away all of this work in a heartbeat and then prevent states from learning those lessons, without providing any alternative framework at the federal level, is just irresponsible,” she said. “It takes power away from the states.”

Trump’s efforts

Thursday’s executive order will establish an AI Litigation Task Force to bring court challenges against states with AI-related laws, with exceptions for a few issues such as child safety protections and data center infrastructure.

The order also directs the secretary of commerce to notify states that they could lose certain funds under the Broadband Equity, Access, and Deployment Program if their laws conflict with national AI policy priorities.

Trump said the order would help the United States beat China for dominance of the burgeoning AI industry, adding that Chinese President Xi Jinping does not face similar restraints.

“This will not be successful unless they have one source of approval or disapproval,” he said. “It’s got to be one source. They can’t go to 50 different sources.”

In July, the Trump administration released the AI Action Plan, an initiative aimed at reducing regulatory barriers and accelerating the growth of AI infrastructure, including data centers. Trump also has revoked Biden-era AI safety and anti-discrimination policies.

The tech industry had lobbied for Trump’s order.

“This executive order is an important step towards ensuring that smart, unified federal policy — not bureaucratic red tape — secures America’s AI dominance for generations to come,” said Amy Bos, vice president of government affairs for NetChoice, a technology trade association, in a statement to Stateline.

As the administration looks to address increasing threats to national defense and cybersecurity, a centralized, national approach to AI policy is best, said Paul Lekas, the executive vice president for global public policy and government affairs at the Software & Information Industry Association.

“The White House is very motivated to ensure that there aren’t barriers to innovation and that we can continue to move forward,” he said. “And the White House is concerned that there is state legislation that may be purporting to regulate interstate commerce. We would be creating a patchwork that would be very hard for innovation.”

Congressional Republicans tried twice this year to pass moratoriums on state AI laws, but both efforts failed.

In the absence of a comprehensive federal artificial intelligence policy, state lawmakers have worked to regulate the rapid development of AI systems and protect consumers from potential harms.

Trump’s executive order could cause concern among lawmakers who fear possible blowback from the administration for their efforts, said Travis Hall, the director for state engagement at the Center for Democracy & Technology, a nonprofit that advocates for digital rights and freedom of expression.

“I can’t imagine that state legislators aren’t going to continue to try to engage with these technologies in order to help protect and respond to the concerns of their constituents,” Hall said. “However, there’s no doubt that the intent of this executive order is to chill any actual oversight, accountability or regulation.”

State rules

This year, 38 states adopted or enacted measures related to artificial intelligence, according to a National Conference of State Legislatures database. Numerous state lawmakers have also prefiled legislation for 2026.

But tensions have grown over the past few months as Trump has pushed for deregulation and states have continued to create guardrails.

In 2024, Colorado Democratic Gov. Jared Polis signed the nation’s first comprehensive artificial intelligence framework into law. Under the law, developers of AI systems will be required to protect consumers from potential algorithmic discrimination.

But implementation of the law was postponed a few months, until June 2026, after negotiations stalled during a special legislative session this summer aimed at ensuring the law did not hinder technological innovation. And a spokesperson for Polis told Bloomberg in May that the governor supported a U.S. House GOP proposal that would impose a moratorium on state AI laws.

Trump’s executive order, which mentions the Colorado law as an example of legislation the administration may challenge, has caused uncertainty among some state lawmakers focused on regulating AI. But Colorado state Rep. Brianna Titone and state Sen. Robert Rodriguez, Democratic sponsors of the law, said they will continue their work.

Unless Congress passes legislation restricting states from enacting AI laws, Trump’s executive order can easily be challenged and overturned in court, Titone said.

“This is just a bunch of hot air,” Titone said. “It doesn’t hold any water and it doesn’t have any teeth because the president doesn’t have the authority to supersede state law. We will continue to do what we need to do for the people in our state, just like we always have, unless there is an actual preemption in federal law.”

California and Illinois also have been at the forefront of artificial intelligence legislation over the past few years. In September, California Democratic Gov. Gavin Newsom signed the nation’s first law establishing a comprehensive legal framework for developers of the most advanced, large-scale artificial intelligence models, known as frontier artificial intelligence models. Those efforts are aimed at preventing AI models from causing catastrophic harm involving dozens of casualties or billion-dollar damages.

California officials have said they are considering a legal challenge over Trump’s order, and other states and groups are likely to sue as well.

Republican officials and GOP-led states, including some Trump allies, also are pushing forward with AI regulations. Efforts to protect consumers from AI harms are being proposed in Missouri, Ohio, Oklahoma, South Carolina, Texas and Utah.

Earlier this month, Florida Republican Gov. Ron DeSantis also unveiled a proposal for an AI Bill of Rights. The proposal aims to strengthen consumer protections related to AI and to address the growing impact data centers are having on local communities.

In South Carolina, Guffey said he plans to introduce a bill in January that would place rules on AI chatbots, which can simulate conversations with users but raise privacy and safety concerns.

Artificial intelligence is developing fast, Guffey noted. State lawmakers have been working on making sure the technology is safe to use — and they’ll keep doing that to protect their constituents, he said.

“The problem is that it’s not treated like a product — it’s treated like a service,” Guffey said. “If it was treated like a product, we have consumer protection laws where things could be recalled and adjusted and then put back out there once they’re safe. But that is not the case with any of this technology.”

Stateline reporter Madyson Fitzgerald can be reached at mfitzgerald@stateline.org.

This story was originally produced by Stateline, which is part of States Newsroom, a nonprofit news network that includes the Wisconsin Examiner and is supported by grants and a coalition of donors as a 501(c)(3) public charity.
