AI vs. AI: Patients deploy bots to battle health insurers that deny care

As states continue to curb health insurers’ use of artificial intelligence, patients and doctors are arming themselves with AI tools to fight claims denials, prior authorizations and soaring medical bills. (Photo by Anna Claire Vollers/Stateline)

As states strive to curb health insurers’ use of artificial intelligence, patients and doctors are arming themselves with AI tools to fight claims denials, prior authorizations and soaring medical bills.

Several businesses and nonprofits have launched AI-powered tools to help patients get their insurance claims paid and navigate byzantine medical bills, creating a robotic tug-of-war over who gets care and who foots the bill for it.

Sheer Health, a three-year-old company that helps patients and providers navigate health insurance and billing, now has an app that allows consumers to connect their health insurance account, upload medical bills and claims, and ask questions about deductibles, copays and covered benefits.

“You would think there would be some sort of technology that could explain in real English why I’m getting a bill for $1,500,” said cofounder Jeff Witten. The program uses both AI and humans to provide the answers for free, he said. Patients who want extra support in challenging a denied claim or dealing with out-of-network reimbursements can pay Sheer Health to handle those for them.

In North Carolina, the nonprofit Counterforce Health designed an AI assistant to help patients appeal their denied health insurance claims and fight large medical bills. The free service uses AI models to analyze a patient’s denial letter, then look through the patient’s policy and outside medical research to draft a customized appeal letter.
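The workflow described above (read the denial letter, pull the relevant policy language, draft a tailored appeal) can be sketched roughly as the pipeline below. This is a purely hypothetical illustration: `llm_complete` is a stand-in for whatever model call a real service would make, and nothing here reflects Counterforce Health's actual implementation.

```python
def draft_appeal(denial_letter: str, policy_text: str, llm_complete) -> str:
    """Hypothetical three-step appeal-drafting flow.

    `llm_complete` is a placeholder for any text-completion model call;
    it takes a prompt string and returns generated text.
    """
    # Step 1: extract why the claim was denied.
    reason = llm_complete(
        "Summarize the reason this claim was denied:\n" + denial_letter
    )
    # Step 2: find the policy language that bears on that reason.
    relevant_policy = llm_complete(
        f"Quote the sections of this policy relevant to: {reason}\n\n{policy_text}"
    )
    # Step 3: draft a customized appeal letter citing that language.
    return llm_complete(
        "Draft a formal appeal letter. Denial reason:\n" + reason
        + "\n\nSupporting policy language:\n" + relevant_policy
    )
```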

Other consumer-focused services use AI to catch billing errors or parse medical jargon. Some patients are even turning to AI chatbots like Grok for help.

A quarter of adults under age 30 said they used an AI chatbot at least once a month for health information or advice, according to a poll the health care research nonprofit KFF published in August 2024. But most adults said they were not confident that the health information is accurate.

State legislators on both sides of the aisle, meanwhile, are scrambling to keep pace, passing new regulations that govern how insurers, physicians and others use AI in health care. Already this year, more than a dozen states have passed laws regulating AI in health care, according to Manatt, a consulting firm.

“It doesn’t feel like a satisfying outcome to just have two robots argue back and forth over whether a patient should access a particular type of care,” said Carmel Shachar, assistant clinical professor of law and the faculty director of the Health Law and Policy Clinic at Harvard Law School.

“We don’t want to get on an AI-enabled treadmill that just speeds up.”

A black box

Health care can feel like a black box. If your doctor says you need surgery, for example, the cost depends on a dizzying number of factors, including your health insurance provider, your specific health plan, its copayment requirements, your deductible, where you live, the facility where the surgery will be performed, whether that facility and your doctor are in-network and your specific diagnosis.

Some insurers may require prior authorization before a surgery is approved. That can entail extensive medical documentation. After a surgery, the resulting bill can be difficult to parse.

Witten, of Sheer Health, said his company has seen thousands of cases in which a doctor recommends a procedure, such as surgery, and then, just days before it is scheduled, the patient learns the insurer didn't approve it.

In recent years, as more health insurance companies have turned to AI to automate claims processing and prior authorizations, the share of denied claims has risen. This year, 41% of physicians and other providers said their claims are denied more than 10% of the time, up from 30% of providers who said that three years ago, according to a September report from credit reporting company Experian.

Insurers on Affordable Care Act marketplaces denied nearly 1 in 5 in-network claims in 2023, up from 17% in 2021, and more than a third of out-of-network claims, according to the most recently available data from KFF.

Insurance giant UnitedHealth Group has come under fire in the media and from federal lawmakers for using algorithms to systematically deny care to seniors, while Humana and other insurers face lawsuits and regulatory investigations that allege they’ve used sophisticated algorithms to block or deny coverage for medical procedures.

Insurers say AI tools can improve efficiency and reduce costs by automating tasks that can involve analyzing vast amounts of data. And companies say they’re monitoring their AI to identify potential problems. A UnitedHealth representative pointed Stateline to the company’s AI Review Board, a team of clinicians, scientists and other experts that reviews its AI models for accuracy and fairness.

“Health plans are committed to responsibly using artificial intelligence to create a more seamless, real-time customer experience and to make claims management faster and more effective for patients and providers,” a spokesperson for America’s Health Insurance Plans, the national trade group representing health insurers, told Stateline.

But states are stepping up oversight.

Arizona, Maryland, Nebraska and Texas, for example, have banned insurance companies from using AI as the sole decisionmaker in prior authorization or medical necessity denials.

Dr. Arvind Venkat is an emergency room physician in the Pittsburgh area. He’s also a Democratic Pennsylvania state representative and the lead sponsor of a bipartisan bill to regulate the use of AI in health care.

He’s seen new technologies reshape health care during his 25 years in medicine, but AI feels wholly different, he said. It’s an “active player” in people’s care in a way that other technologies haven’t been.

“If we’re able to harness this technology to improve the delivery and efficiency of clinical care, that is a huge win,” said Venkat. But he’s worried about AI use without guardrails.

His legislation would force insurers and health care providers in Pennsylvania to be more transparent about how they use AI; require a human to make the final decision any time AI is used; and mandate that they show evidence of minimizing bias in their use of AI.

“In health care, where it’s so personal and the stakes are so high, we need to make sure we’re mandating in every patient’s case that we’re applying artificial intelligence in a way that looks at the individual patient,” Venkat said.

Patient supervision

Historically, consumers have rarely challenged denied claims: A KFF analysis found fewer than 1% of health coverage denials are appealed. And even when they are, patients lose more than half of those appeals.

New consumer-focused AI tools could shift that dynamic by making appeals easier to file and the process easier to understand. But there are limits; without human oversight, experts say, the AI is vulnerable to mistakes.

“It can be difficult for a layperson to understand when AI is doing good work and when it is hallucinating or giving something that isn’t quite accurate,” said Shachar, of Harvard Law School.

For example, an AI tool might draft an appeals letter that a patient thinks looks impressive. But because most patients aren’t medical experts, they may not recognize if the AI misstates medical information, derailing an appeal, she said.

“The challenge is, if the patient is the one driving the process, are they going to be able to properly supervise the AI?” she said.

Earlier this year, Mathew Evins learned just 48 hours before his scheduled back surgery that his insurer wouldn’t cover it. Evins, a 68-year-old public relations executive who lives in Florida, worked with his physician to appeal, but got nowhere. He used an AI chatbot to draft a letter to his insurer, but that failed, too.

On his son’s recommendation, Evins turned to Sheer Health. He said Sheer identified a coding error in his medical records and handled communications with his insurer. The surgery was approved about three weeks later.

“It’s unfortunate that the public health system is so broken that it needs a third party to intervene on the patient’s behalf,” Evins told Stateline. But he’s grateful the technology made it possible to get life-changing surgery.

“AI in and of itself isn’t an answer,” he said. “AI, when used by a professional that understands the issues and ramifications of a particular problem, that’s a different story. Then you’ve got an effective tool.”

Most experts and lawmakers agree a human is needed to keep the robots in check.

AI has made it possible for insurance companies to rapidly assess cases and make decisions about whether to authorize surgeries or cover certain medical care. But that ability to make lightning-fast determinations should be tempered with a human, Venkat said.

“It’s why we need government regulation and why we need to make sure we mandate an individualized assessment with a human decisionmaker.”

Witten said there are situations in which AI works well, such as when it sifts through an insurance policy — which is essentially a contract between the company and the consumer — and connects the dots between the policy’s coverage and a corresponding insurance claim.

But, he said, “there are complicated cases out there AI just can’t resolve.” That’s when a human is needed to review.

“I think there’s a huge opportunity for AI to improve the patient experience and overall provider experience,” Witten said. “Where I worry is when you have insurance companies or other players using AI to completely replace customer support and human interaction.”

Furthermore, a growing body of research has found AI can reinforce bias that’s found elsewhere in medicine, discriminating against women, ethnic and racial minorities, and those with public insurance.

“The conclusions from artificial intelligence can reinforce discriminatory patterns and violate privacy in ways that we have already legislated against,” Venkat said.

Stateline reporter Anna Claire Vollers can be reached at avollers@stateline.org.

This story was originally produced by Stateline, which is part of States Newsroom, a nonprofit news network that includes the Wisconsin Examiner and is supported by grants and a coalition of donors as a 501(c)(3) public charity.

Rooftop panels, EV chargers, and smart thermostats could chip in to boost power grid resilience

There’s a lot of untapped potential in our homes and vehicles that could be harnessed to reinforce local power grids and make them more resilient to unforeseen outages, a new study shows.

In response to a cyber attack or natural disaster, a backup network of decentralized devices — such as residential solar panels, batteries, electric vehicles, heat pumps, and water heaters — could restore electricity or relieve stress on the grid, MIT engineers say.

Such devices are “grid-edge” resources found close to the consumer rather than near central power plants, substations, or transmission lines. Grid-edge devices can independently generate or store power, or tune their own power consumption. In their study, the research team shows how such devices could one day be called upon to either pump power into the grid, or rebalance it by dialing down or delaying their power use.

In a paper appearing this week in the Proceedings of the National Academy of Sciences, the engineers present a blueprint for how grid-edge devices could reinforce the power grid through a “local electricity market.” Owners of grid-edge devices could subscribe to a regional market and essentially loan out their device to be part of a microgrid or a local network of on-call energy resources.

In the event that the main power grid is compromised, an algorithm developed by the researchers would kick in for each local electricity market, to quickly determine which devices in the network are trustworthy. The algorithm would then identify the combination of trustworthy devices that would most effectively mitigate the power failure, by either pumping power into the grid or reducing the power they draw from it, by an amount that the algorithm would calculate and communicate to the relevant subscribers. The subscribers could then be compensated through the market, depending on their participation.
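To make that selection step concrete, here is a minimal, hypothetical Python sketch of how an operator might screen subscribed devices by a trust score and then greedily assemble enough power injections and demand reductions to cover a shortfall. The device fields, trust threshold, and greedy rule are illustrative assumptions, not the researchers' published algorithm.

```python
from dataclasses import dataclass

@dataclass
class GridEdgeDevice:
    """A hypothetical grid-edge resource enrolled in a local electricity market."""
    owner: str
    kind: str            # e.g. "rooftop_solar", "ev_battery", "heat_pump"
    trust_score: float   # 0.0-1.0, e.g. from attestation or anomaly checks
    inject_kw: float     # power it can feed into the grid right now
    curtail_kw: float    # demand it can shed or defer right now

def plan_response(devices, shortfall_kw, trust_threshold=0.8):
    """Greedy sketch: cover a power shortfall using only trustworthy devices.

    Returns (dispatch plan, unmet shortfall). This illustrates the general
    idea of trust screening plus dispatch, not the study's actual method.
    """
    trusted = [d for d in devices if d.trust_score >= trust_threshold]
    # Prefer the devices that can contribute the most power overall.
    trusted.sort(key=lambda d: d.inject_kw + d.curtail_kw, reverse=True)

    plan, remaining = [], shortfall_kw
    for d in trusted:
        if remaining <= 0:
            break
        contribution = min(d.inject_kw + d.curtail_kw, remaining)
        plan.append((d, contribution))
        remaining -= contribution
    return plan, remaining  # remaining > 0 means the microgrid cannot fully cover it

# Example: a 12 kW shortfall spread across a handful of subscribed homes.
fleet = [
    GridEdgeDevice("home_1", "rooftop_solar", 0.95, inject_kw=4.0, curtail_kw=0.0),
    GridEdgeDevice("home_2", "ev_battery",    0.90, inject_kw=7.0, curtail_kw=0.0),
    GridEdgeDevice("home_3", "heat_pump",     0.60, inject_kw=0.0, curtail_kw=3.0),  # below trust threshold, skipped
    GridEdgeDevice("home_4", "water_heater",  0.85, inject_kw=0.0, curtail_kw=4.5),
]
dispatch, unmet = plan_response(fleet, shortfall_kw=12.0)
for device, kw in dispatch:
    print(f"{device.owner}: contribute {kw:.1f} kW ({device.kind})")
print(f"Unmet shortfall: {unmet:.1f} kW")
```

In a real market, the compensation the article mentions could then be computed from each subscriber's contribution in the dispatch plan.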

The team illustrated the new framework through a number of grid attack scenarios, considering failures at different levels of a power grid caused by events such as a cyber attack or a natural disaster. Applying their algorithm, they showed that networks of grid-edge devices were able to counteract each of these attacks.

The results demonstrate that grid-edge devices such as rooftop solar panels, EV chargers, batteries, and smart thermostats (for HVAC devices or heat pumps) could be tapped to stabilize the power grid in the event of an attack.

“All these small devices can do their little bit in terms of adjusting their consumption,” says study co-author Anu Annaswamy, a research scientist in MIT’s Department of Mechanical Engineering. “If we can harness our smart dishwashers, rooftop panels, and EVs, and put our combined shoulders to the wheel, we can really have a resilient grid.”

The study’s MIT co-authors include lead author Vineet Nair and John Williams, along with collaborators from multiple institutions including the Indian Institute of Technology, the National Renewable Energy Laboratory, and elsewhere.

Power boost

The team’s study is an extension of their broader work in adaptive control theory and designing systems to automatically adapt to changing conditions. Annaswamy, who leads the Active-Adaptive Control Laboratory at MIT, explores ways to boost the reliability of renewable energy sources such as solar power.

“These renewables come with a strong temporal signature, in that we know for sure the sun will set every day, so the solar power will go away,” Annaswamy says. “How do you make up for the shortfall?”

The researchers found the answer could lie in the many grid-edge devices that consumers are increasingly installing in their own homes.

“There are lots of distributed energy resources that are coming up now, closer to the customer rather than near large power plants, and it’s mainly because of individual efforts to decarbonize,” Nair says. “So you have all this capability at the grid edge. Surely we should be able to put them to good use.”

While considering ways to deal with drops in energy from the normal operation of renewable sources, the team also began to look into other causes of power dips, such as from cyber attacks. They wondered, in these malicious instances, whether and how the same grid-edge devices could step in to stabilize the grid following an unforeseen, targeted attack.

Attack mode

In their new work, Annaswamy, Nair, and their colleagues developed a framework for incorporating grid-edge devices, and in particular, internet-of-things (IoT) devices, in a way that would support the larger grid in the event of an attack or disruption. IoT devices are physical objects that contain sensors and software that connect to the internet.

For their new framework, named EUREICA (Efficient, Ultra-REsilient, IoT-Coordinated Assets), the researchers start with the assumption that one day, most grid-edge devices will also be IoT devices, enabling rooftop panels, EV chargers, and smart thermostats to wirelessly connect to a larger network of similarly independent and distributed devices. 

The team envisions that for a given region, such as a community of 1,000 homes, there exists a certain number of IoT devices that could potentially be enlisted in the region’s local network, or microgrid. Such a network would be managed by an operator, who would be able to communicate with operators of other nearby microgrids.

If the main power grid is compromised or attacked, operators would run the researchers’ decision-making algorithm to determine trustworthy devices within the network that can pitch in to help mitigate the attack.

The team tested the algorithm on a number of scenarios, such as a cyber attack in which all smart thermostats made by a certain manufacturer are hacked to raise their setpoints simultaneously to a degree that dramatically alters a region’s energy load and destabilizes the grid. The researchers also considered attacks and weather events that would shut off the transmission of energy at various levels and nodes throughout a power grid.
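For a sense of scale, the back-of-the-envelope sketch below estimates the sudden demand spike that such a coordinated thermostat attack could create. The per-home figures are assumptions chosen for illustration, not numbers from the study.

```python
def thermostat_attack_load_spike(n_hacked, kw_per_degree, setpoint_increase_c):
    """Rough, hypothetical estimate of extra HVAC load (kW) when a fleet of
    compromised smart thermostats raises its setpoints at the same time.

    kw_per_degree is an assumed average extra draw per home per degree of change.
    """
    return n_hacked * kw_per_degree * setpoint_increase_c

# Example: 2,000 hacked thermostats, ~0.5 kW extra per home per degree, raised 3 degrees C.
spike_kw = thermostat_attack_load_spike(2_000, kw_per_degree=0.5, setpoint_increase_c=3)
print(f"Sudden extra demand: {spike_kw / 1000:.1f} MW")  # about 3 MW hitting the feeder at once
```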

“In our attacks we consider between 5 and 40 percent of the power being lost. We assume some nodes are attacked, and some are still available and have some IoT resources, whether a battery with energy available or an EV or HVAC device that’s controllable,” Nair explains. “So, our algorithm decides which of those houses can step in to either provide extra power generation to inject into the grid or reduce their demand to meet the shortfall.”

In every scenario that they tested, the team found that the algorithm was able to successfully restabilize the grid and mitigate the attack or power failure. They acknowledge that to put in place such a network of grid-edge devices will require buy-in from customers, policymakers, and local officials, as well as innovations such as advanced power inverters that enable EVs to inject power back into the grid.

“This is just the first of many steps that have to happen in quick succession for this idea of local electricity markets to be implemented and expanded upon,” Annaswamy says. “But we believe it’s a good start.”

This work was supported, in part, by the U.S. Department of Energy and the MIT Energy Initiative.

Image: An example of the different types of IoT devices, physical objects that contain sensors and software that connect to the internet, that are coordinated to increase power grid resilience. (Credit: Courtesy of the researchers)