States will keep pushing AI laws despite Trump’s efforts to stop them

13 December 2025 at 14:28

A billboard advertises an artificial intelligence company in San Francisco in September. California is among the states leading the way on AI regulations, but an executive order signed by President Donald Trump seeks to override state laws on the technology. (Photo by Justin Sullivan/Getty Images)

State lawmakers of both parties said they plan to keep passing laws regulating artificial intelligence despite President Donald Trump’s efforts to stop them.

Trump signed an executive order Thursday evening that aims to override state artificial intelligence laws. He said his administration must work with Congress to develop a national AI policy, but that in the meantime, it will crack down on state laws.

The order comes after several other Trump administration efforts to rein in state AI laws and loosen restrictions for developers and technology companies.

But despite those moves, state lawmakers are continuing to prefile legislation related to artificial intelligence in preparation for their 2026 legislative sessions. Opponents are also skeptical about — and likely to sue over — Trump’s proposed national framework and his ability to restrict states from passing legislation.

“I agree on not overregulating, but I don’t believe the federal government has the right to take away my right to protect my constituents if there’s an issue with AI,” said South Carolina Republican state Rep. Brandon Guffey, who penned a letter to Congress opposing legislation that would curtail state AI laws.

The letter, signed by 280 state lawmakers from across the country, shows that legislators from both parties want to retain their ability to craft their own AI legislation, said South Dakota Democratic state Sen. Liz Larson, its co-author.

Earlier this year, South Dakota Republican Gov. Larry Rhoden signed the state’s first artificial intelligence law, authored by Larson, prohibiting the use of a deepfake — a digitally altered photo or video that can make someone appear to be doing just about anything — to influence an election.

South Dakota, along with states that have more comprehensive AI laws, such as California and Colorado, would see their efforts overruled by Trump’s order, Larson said.

“To take away all of this work in a heartbeat and then prevent states from learning those lessons, without providing any alternative framework at the federal level, is just irresponsible,” she said. “It takes power away from the states.”

Trump’s efforts

Thursday’s executive order will establish an AI Litigation Task Force to bring court challenges against states with AI-related laws, with exceptions for a few issues such as child safety protections and data center infrastructure.

The order also directs the secretary of commerce to notify states that they could lose certain funds under the Broadband Equity, Access, and Deployment Program if their laws conflict with national AI policy priorities.

Trump said the order would help the United States beat China in dominating the burgeoning AI industry, adding that Chinese President Xi Jinping did not have similar restraints.

“This will not be successful unless they have one source of approval or disapproval,” he said. “It’s got to be one source. They can’t go to 50 different sources.”

In July, the Trump administration released the AI Action Plan, an initiative aimed at reducing regulatory barriers and accelerating the growth of AI infrastructure, including data centers. Trump also has revoked Biden-era AI safety and anti-discrimination policies.

The tech industry had lobbied for Trump’s order.

“This executive order is an important step towards ensuring that smart, unified federal policy — not bureaucratic red tape — secures America’s AI dominance for generations to come,” said Amy Bos, vice president of government affairs for NetChoice, a technology trade association, in a statement to Stateline.

As the administration looks to address increasing threats to national defense and cybersecurity, a centralized, national approach to AI policy is best, said Paul Lekas, the executive vice president for global public policy and government affairs at the Software & Information Industry Association.

“The White House is very motivated to ensure that there aren’t barriers to innovation and that we can continue to move forward,” he said. “And the White House is concerned that there is state legislation that may be purporting to regulate interstate commerce. We would be creating a patchwork that would be very hard for innovation.”

Congressional Republicans tried twice this year to pass moratoriums on state AI laws, but both efforts failed.

In the absence of a comprehensive federal artificial intelligence policy, state lawmakers have worked to regulate the rapid development of AI systems and protect consumers from potential harms.

Trump’s executive order could cause concern among lawmakers who fear possible blowback from the administration for their efforts, said Travis Hall, the director for state engagement at the Center for Democracy & Technology, a nonprofit that advocates for digital rights and freedom of expression.

“I can’t imagine that state legislators aren’t going to continue to try to engage with these technologies in order to help protect and respond to the concerns of their constituents,” Hall said. “However, there’s no doubt that the intent of this executive order is to chill any actual oversight, accountability or regulation.”

State rules

This year, 38 states adopted or enacted measures related to artificial intelligence, according to a National Conference of State Legislatures database. Numerous state lawmakers have also prefiled legislation for 2026.

But tensions have grown over the past few months as Trump has pushed for deregulation and states have continued to create guardrails.

It doesn’t hold any water and it doesn’t have any teeth because the president doesn’t have the authority to supersede state law.

– Colorado Democratic state Rep. Brianna Titone

In 2024, Colorado Democratic Gov. Jared Polis signed the nation’s first comprehensive artificial intelligence framework into law. Under the law, developers of AI systems will be required to protect consumers from potential algorithmic discrimination.

But implementation of the law was postponed a few months, until June 2026, after negotiations stalled during a special legislative session this summer that aimed to ensure the law did not hinder technological innovation. And a spokesperson for Polis told Bloomberg in May that the governor supported a U.S. House GOP proposal that would impose a moratorium on state AI laws.

Trump’s executive order, which mentions the Colorado law as an example of legislation the administration may challenge, has caused uncertainty among some state lawmakers focused on regulating AI. But Colorado state Rep. Brianna Titone and state Sen. Robert Rodriguez, Democratic sponsors of the law, said they will continue their work.

Unless Congress passes legislation restricting states from enacting AI laws, Trump’s executive order can easily be challenged and overturned in court, Titone said.

“This is just a bunch of hot air,” Titone said. “It doesn’t hold any water and it doesn’t have any teeth because the president doesn’t have the authority to supersede state law. We will continue to do what we need to do for the people in our state, just like we always have, unless there is an actual preemption in federal law.”

California and Illinois also have been at the forefront of artificial intelligence legislation over the past few years. In September, California Democratic Gov. Gavin Newsom signed the nation’s first law establishing a comprehensive legal framework for developers of the most advanced, large-scale artificial intelligence models, known as frontier models. The law aims to prevent those models from causing catastrophic harm, such as incidents involving dozens of casualties or damages in the billions of dollars.

California officials have said they are considering a legal challenge over Trump’s order, and other states and groups are likely to sue as well.

Republican officials and GOP-led states, including some Trump allies, also are pushing forward with AI regulations. Efforts to protect consumers from AI harms are being proposed in Missouri, Ohio, Oklahoma, South Carolina, Texas and Utah.

Earlier this month, Florida Republican Gov. Ron DeSantis also unveiled a proposal for an AI Bill of Rights. The proposal aims to strengthen consumer protections related to AI and to address the growing impact data centers are having on local communities.

In South Carolina, Guffey said he plans to introduce a bill in January that would place rules on AI chatbots, which can simulate conversations with users but raise privacy and safety concerns.

Artificial intelligence is developing fast, Guffey noted. State lawmakers have been working to make sure the technology is safe to use, and they will keep doing so to protect their constituents, he said.

“The problem is that it’s not treated like a product — it’s treated like a service,” Guffey said. “If it was treated like a product, we have consumer protection laws where things could be recalled and adjusted and then put back out there once they’re safe. But that is not the case with any of this technology.”

Stateline reporter Madyson Fitzgerald can be reached at mfitzgerald@stateline.org.

This story was originally produced by Stateline, which is part of States Newsroom, a nonprofit news network that includes Wisconsin Examiner and is supported by grants and a coalition of donors as a 501(c)(3) public charity.

AI vs. AI: Patients deploy bots to battle health insurers that deny care

24 November 2025 at 11:00

As states continue to curb health insurers’ use of artificial intelligence, patients and doctors are arming themselves with AI tools to fight claims denials, prior authorizations and soaring medical bills. (Photo by Anna Claire Vollers/Stateline)

As states strive to curb health insurers’ use of artificial intelligence, patients and doctors are arming themselves with AI tools to fight claims denials, prior authorizations and soaring medical bills.

Several businesses and nonprofits have launched AI-powered tools to help patients get their insurance claims paid and navigate byzantine medical bills, creating a robotic tug-of-war over who gets care and who foots the bill for it.

Sheer Health, a three-year-old company that helps patients and providers navigate health insurance and billing, now has an app that allows consumers to connect their health insurance account, upload medical bills and claims, and ask questions about deductibles, copays and covered benefits.

“You would think there would be some sort of technology that could explain in real English why I’m getting a bill for $1,500,” said co-founder Jeff Witten. The program uses both AI and humans to provide the answers for free, he said. Patients who want extra support in challenging a denied claim or dealing with out-of-network reimbursements can pay Sheer Health to handle those for them.

In North Carolina, the nonprofit Counterforce Health designed an AI assistant to help patients appeal their denied health insurance claims and fight large medical bills. The free service uses AI models to analyze a patient’s denial letter, then look through the patient’s policy and outside medical research to draft a customized appeal letter.

Other consumer-focused services use AI to catch billing errors or parse medical jargon. Some patients are even turning to AI chatbots like Grok for help.

A quarter of adults under age 30 said they used an AI chatbot at least once a month for health information or advice, according to a poll the health care research nonprofit KFF published in August 2024. But most adults said they were not confident that the health information was accurate.

State legislators on both sides of the aisle, meanwhile, are scrambling to keep pace, passing new regulations that govern how insurers, physicians and others use AI in health care. Already this year, more than a dozen states have passed laws regulating AI in health care, according to Manatt, a consulting firm.

“It doesn’t feel like a satisfying outcome to just have two robots argue back and forth over whether a patient should access a particular type of care,” said Carmel Shachar, assistant clinical professor of law and the faculty director of the Health Law and Policy Clinic at Harvard Law School.

“We don’t want to get on an AI-enabled treadmill that just speeds up.”

A black box

Health care can feel like a black box. If your doctor says you need surgery, for example, the cost depends on a dizzying number of factors, including your health insurance provider, your specific health plan, its copayment requirements, your deductible, where you live, the facility where the surgery will be performed, whether that facility and your doctor are in-network and your specific diagnosis.

Some insurers may require prior authorization before a surgery is approved. That can entail extensive medical documentation. After a surgery, the resulting bill can be difficult to parse.

Witten, of Sheer Health, said his company has seen thousands of cases in which a doctor recommends a procedure, such as surgery, only for the patient to learn a few days beforehand that insurance didn’t approve it.

You would think there would be some sort of technology that could explain in real English why I’m getting a bill for $1,500.

– Sheer Health co-founder Jeff Witten

In recent years, as more health insurance companies have turned to AI to automate claims processing and prior authorizations, the share of denied claims has risen. This year, 41% of physicians and other providers said their claims are denied more than 10% of the time, up from 30% three years ago, according to a September report from credit reporting company Experian.

Insurers on Affordable Care Act marketplaces denied nearly 1 in 5 in-network claims in 2023, up from 17% in 2021, and more than a third of out-of-network claims, according to the most recently available data from KFF.

Insurance giant UnitedHealth Group has come under fire in the media and from federal lawmakers for using algorithms to systematically deny care to seniors, while Humana and other insurers face lawsuits and regulatory investigations that allege they’ve used sophisticated algorithms to block or deny coverage for medical procedures.

Insurers say AI tools can improve efficiency and reduce costs by automating tasks that can involve analyzing vast amounts of data. And companies say they’re monitoring their AI to identify potential problems. A UnitedHealth representative pointed Stateline to the company’s AI Review Board, a team of clinicians, scientists and other experts that reviews its AI models for accuracy and fairness.

“Health plans are committed to responsibly using artificial intelligence to create a more seamless, real-time customer experience and to make claims management faster and more effective for patients and providers,” a spokesperson for America’s Health Insurance Plans, the national trade group representing health insurers, told Stateline.

But states are stepping up oversight.

Arizona, Maryland, Nebraska and Texas, for example, have banned insurance companies from using AI as the sole decisionmaker in prior authorization or medical necessity denials.

Dr. Arvind Venkat is an emergency room physician in the Pittsburgh area. He’s also a Democratic Pennsylvania state representative and the lead sponsor of a bipartisan bill to regulate the use of AI in health care.

He’s seen new technologies reshape health care during his 25 years in medicine, but AI feels wholly different, he said. It’s an “active player” in people’s care in a way that other technologies haven’t been.

“If we’re able to harness this technology to improve the delivery and efficiency of clinical care, that is a huge win,” said Venkat. But he’s worried about AI use without guardrails.

His legislation would force insurers and health care providers in Pennsylvania to be more transparent about how they use AI; require a human to make the final decision any time AI is used; and mandate that they show evidence of minimizing bias in their use of AI.

“In health care, where it’s so personal and the stakes are so high, we need to make sure we’re mandating in every patient’s case that we’re applying artificial intelligence in a way that looks at the individual patient,” Venkat said.

Patient supervision

Historically, consumers rarely challenge denied claims: A KFF analysis found fewer than 1% of health coverage denials are appealed. And even when they are, patients lose more than half of those appeals.

New consumer-focused AI tools could shift that dynamic by making appeals easier to file and the process easier to understand. But there are limits; without human oversight, experts say, the AI is vulnerable to mistakes.

“It can be difficult for a layperson to understand when AI is doing good work and when it is hallucinating or giving something that isn’t quite accurate,” said Shachar, of Harvard Law School.

For example, an AI tool might draft an appeals letter that a patient thinks looks impressive. But because most patients aren’t medical experts, they may not recognize if the AI misstates medical information, derailing an appeal, she said.

“The challenge is, if the patient is the one driving the process, are they going to be able to properly supervise the AI?” she said.

Earlier this year, Mathew Evins learned just 48 hours before his scheduled back surgery that his insurer wouldn’t cover it. Evins, a 68-year-old public relations executive who lives in Florida, worked with his physician to appeal, but got nowhere. He used an AI chatbot to draft a letter to his insurer, but that failed, too.

On his son’s recommendation, Evins turned to Sheer Health. He said Sheer identified a coding error in his medical records and handled communications with his insurer. The surgery was approved about three weeks later.

“It’s unfortunate that the public health system is so broken that it needs a third party to intervene on the patient’s behalf,” Evins told Stateline. But he’s grateful the technology made it possible to get life-changing surgery.

“AI in and of itself isn’t an answer,” he said. “AI, when used by a professional that understands the issues and ramifications of a particular problem, that’s a different story. Then you’ve got an effective tool.”

Most experts and lawmakers agree a human is needed to keep the robots in check.

AI has made it possible for insurance companies to rapidly assess cases and make decisions about whether to authorize surgeries or cover certain medical care. But that ability to make lightning-fast determinations should be tempered with a human, Venkat said.

“It’s why we need government regulation and why we need to make sure we mandate an individualized assessment with a human decisionmaker.”

Witten said there are situations in which AI works well, such as when it sifts through an insurance policy — which is essentially a contract between the company and the consumer — and connects the dots between the policy’s coverage and a corresponding insurance claim.

But, he said, “there are complicated cases out there AI just can’t resolve.” That’s when a human is needed to review.

“I think there’s a huge opportunity for AI to improve the patient experience and overall provider experience,” Witten said. “Where I worry is when you have insurance companies or other players using AI to completely replace customer support and human interaction.”

Furthermore, a growing body of research has found AI can reinforce bias that’s found elsewhere in medicine, discriminating against women, ethnic and racial minorities, and those with public insurance.

“The conclusions from artificial intelligence can reinforce discriminatory patterns and violate privacy in ways that we have already legislated against,” Venkat said.

Stateline reporter Anna Claire Vollers can be reached at avollers@stateline.org.

This story was originally produced by Stateline, which is part of States Newsroom, a nonprofit news network that includes Wisconsin Examiner and is supported by grants and a coalition of donors as a 501(c)(3) public charity.
