
AI therapy chatbots draw new oversight as suicides raise alarm

18 January 2026 at 11:00

A young woman asks AI companion ChatGPT for help this month in New York City. States are pushing to prevent the use of artificially intelligent chatbots in mental health to try to protect vulnerable users. (Photo by Shalina Chatlani/Stateline)

Editor’s note: If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988. There is also an online chat at 988lifeline.org.

States are passing laws to prevent artificially intelligent chatbots such as ChatGPT from offering mental health advice to young users, following a string of cases in which people harmed themselves after seeking therapy from the programs.

Chatbots might be able to offer resources, direct users to mental health practitioners or suggest coping strategies. But many mental health experts say that’s a fine line to walk, as vulnerable users in dire situations require care from a professional, someone who must adhere to laws and regulations around their practice.

“I have met some of the families who have really tragically lost their children following interactions that their kids had with chatbots that were designed, in some cases, to be extremely deceptive, if not manipulative, in encouraging kids to end their lives,” said Mitch Prinstein, senior science adviser at the American Psychological Association and an expert on technology and children’s mental health.

“So in such egregious situations, it’s clear that something’s not working right, and we need at least some guardrails to help in situations like that,” he said.

While chatbots have been around for decades, AI technology has become so sophisticated that users may feel like they’re talking to a human. The chatbots don’t have the capacity to offer true empathy or mental health advice the way a licensed psychologist would, and they are agreeable by design, a potentially dangerous trait for someone experiencing suicidal ideation. Several young people have died by suicide following interactions with chatbots.

States have enacted a variety of laws to regulate the types of interactions chatbots can have with users. Illinois and Nevada have completely banned the use of AI for behavioral health. New York and Utah passed laws requiring chatbots to explicitly tell users that they are not human. New York’s law also directs chatbots to detect instances of potential self-harm and refer the user to crisis hotlines and other interventions.

More laws may be coming. California and Pennsylvania are among the states that might consider legislation to regulate AI therapy.

President Donald Trump has criticized state-by-state regulation of AI, saying it stymies innovation. In December, he signed an executive order that aims to support the United States’ “global AI dominance” by overriding state artificial intelligence laws and establishing a national framework.

Still, states are moving ahead. Before Trump’s executive order, Florida Republican Gov. Ron DeSantis last month proposed a “Citizen Bill of Rights For Artificial Intelligence” that, among many other things, would prohibit AI from being used for “licensed” therapy or mental health counseling and provide parental controls for minors who may be exposed to it.

“The rise of AI is the most significant economic and cultural shift occurring at the moment; denying the people the ability to channel these technologies in a productive way via self-government constitutes federal government overreach and lets technology companies run wild,” DeSantis wrote on social media platform X in November.

‘A false sense of intimacy’

At a U.S. Senate Judiciary Committee hearing last September, some parents shared their stories about their children’s deaths after ongoing interactions with an artificially intelligent chatbot.

Sewell Setzer III was 14 years old when he died by suicide in 2024 after becoming obsessed with a chatbot.

“Instead of preparing for high school milestones, Sewell spent his last months being manipulated and sexually groomed by chatbots designed by an AI company to seem human, to gain trust, and to keep children like him endlessly engaged by supplanting the actual human relationships in his life,” his mother, Megan Garcia, said during the hearing.

Another parent, Matthew Raine, testified about his son Adam, who died by suicide at age 16 after talking for months with ChatGPT, a program owned by the company OpenAI.

“We’re convinced that Adam’s death was avoidable, and because we believe thousands of other teens who are using OpenAI could be in similar danger right now,” Raine said.

Prinstein, of the American Psychological Association, said that kids are especially vulnerable when it comes to AI chatbots.

“By agreeing with everything that kids say, it develops a false sense of intimacy and trust. That’s really concerning, because kids in particular are developing their brains. That approach is going to be unfairly attractive to kids in a way that may make them unable to use reason, judgment and restraints in the way that adults would likely use when interacting with a chatbot.”

The Federal Trade Commission in September launched an inquiry into seven companies making these AI-powered chatbots, questioning what efforts are in place to protect children.

​​“AI chatbots can effectively mimic human characteristics, emotions, and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots,” the FTC said in its order.

Companies such as OpenAI have responded by saying that they are working with mental health experts to make their products safer and to limit the chances of self-harm among their users.

“Working with mental health experts who have real-world clinical experience, we’ve taught the model to better recognize distress, de-escalate conversations, and guide people toward professional care when appropriate,” the company wrote in a statement last October.

Legislative efforts

With action at the federal level in limbo, efforts to regulate AI chatbots at the state level have had limited success.

Dr. John “Nick” Shumate, a psychiatrist at Beth Israel Deaconess Medical Center, a Harvard teaching hospital, and his colleagues reviewed legislation to regulate mental health-related artificial intelligence systems across all states between January 2022 and May 2025.

The review found 143 bills directly or indirectly related to AI and mental health regulation. As of May 2025, 11 states had enacted 20 laws that the researchers judged to be meaningful, direct and explicit in how they attempted to regulate AI in mental health interactions.

They concluded that legislative efforts tended to fall into four different buckets: professional oversight, harm prevention, patient autonomy and data governance.

“You saw safety laws for chatbots and companion AIs, especially around self-harm and suicide response,” Shumate said in an interview.

New York enacted one such law last year that requires AI chatbots to remind users every three hours that they are not human. The law also requires chatbots to detect signs of potential self-harm.

“There’s no denying that in this country, we’re in a mental health crisis,” New York Democratic state Sen. Kristen Gonzalez, the law’s sponsor, said in an interview. “But the solution shouldn’t be to replace human support from licensed professionals with untrained AI chatbots that can leak sensitive information and can lead to broad outcomes.”

In Virginia, Democratic Del. Michelle Maldonado is preparing legislation for this year’s session that would put limits on what chatbots can communicate to users in a therapeutic setting.

“The federal level has been slow to pass things, slow to even create legislative language around things. So we have had no choice but to fill in that gap,” said Maldonado, a former technology lawyer.

She noted that states have already passed privacy laws, restrictions on nonconsensual intimate images, licensing requirements and disclosure requirements.

New York Democratic state Sen. Andrew Gounardes, who sponsored a law regulating AI transparency, said he’s seen the growing influence of AI companies at the state level.

And that is concerning to him, he said, as states try to take on AI companies for issues ranging from mental health to misinformation and beyond.

“They are hiring former staffers to become public affairs officers. They are hiring lobbyists who know legislators to kind of get in with them. They’re hosting events, you know, by the Capitol, at political conferences, to try to build goodwill,” Gounardes said.

“These are the wealthiest, richest, biggest companies in the world,” he said. “And so we have to really not let up our guard for a moment against that type of concentrated power, money and influence.”

Stateline reporter Shalina Chatlani can be reached at schatlani@stateline.org.

This story was originally produced by Stateline, which is part of States Newsroom, a nonprofit news network that includes Wisconsin Examiner and is supported by grants and a coalition of donors as a 501(c)(3) public charity.


Worried about surveillance, states enact privacy laws and restrict license plate readers

11 January 2026 at 16:00

A police officer uses the Flock Safety license plate reader system. Many left-leaning states and cities are trying to protect their residents’ personal information amid the Trump administration's immigration crackdown, but a growing number of conservative lawmakers also want to curb the use of surveillance technologies. (Photo courtesy of Flock Safety)

As part of its deportation efforts, the Trump administration has ordered states to hand over personal data from voter rolls, driver’s license records and programs such as Medicaid and food stamps.

At the same time, the administration is trying to consolidate the bits of personal data held across federal agencies, creating a single trove of information on people who live in the United States.

Many left-leaning states and cities are trying to protect their residents’ personal information amid the immigration crackdown. But a growing number of conservative lawmakers also want to curb the use of surveillance technologies, such as automated license plate readers, that can be used to identify and track people.

Conservative-led states such as Arkansas, Idaho and Montana enacted laws last year designed to protect the personal data collected through license plate readers and other means. They joined at least five left-leaning states — Illinois, Massachusetts, Minnesota, New York and Washington — that specifically blocked U.S. Immigration and Customs Enforcement from accessing their driver’s license records.

In addition, Democratic-led cities in Colorado, Illinois, Massachusetts, New York, North Carolina, Texas and Washington last year terminated their contracts with Flock Safety, the largest provider of license plate readers in the U.S.

The Trump administration’s goal is to create a “surveillance dragnet across the country,” said William Owen, communications director at the Surveillance Technology Oversight Project, a nonprofit that advocates for stronger privacy laws.

“We’re entering an increasingly dystopian era of high-tech surveillance,” Owen said. Intelligence sharing between various levels of government, he said, has “allowed ICE to sidestep sanctuary laws and co-opt local police databases and surveillance tools, including license plate readers, facial recognition and other technologies.”

A new Montana law bars government entities from accessing electronic communications and related material without a warrant. Republican state Sen. Daniel Emrich, the law’s author, said “the most important thing that our entire justice system is based on is the principle against unlawful search and seizure” — the right enshrined in the Fourth Amendment to the U.S. Constitution.

“It’s tough to find individuals who are constitutionally grounded and understand the necessity of keeping the Fourth Amendment rights intact at all times for all reasons — with minimal or zero exceptions,” Emrich said in an interview.

ICE did not respond to Stateline’s requests for comment.

Automated license plate readers

Recently, cities and states have grown particularly concerned about the use of automated license plate readers (ALPRs): high-speed camera and computer systems that capture license plate information from passing vehicles. The readers sit atop police cars and streetlights, or can be hidden inside construction barrels and utility poles.

Some cameras collect data that gets stored in databases for years, raising concerns among privacy advocates. One report from the Brennan Center for Justice, a progressive think tank at New York University, found the data can be susceptible to hacking. Different agencies have varying policies on how long they keep the data, according to the International Association of Chiefs of Police, a law enforcement advocacy group.

Supporters, including many in law enforcement, say the technology is a powerful tool for tracking down criminal suspects.

Flock Safety says it has cameras in more than 5,000 communities and is connected to more than 4,800 law enforcement agencies across 49 states. The company claims its cameras conduct more than 20 billion license plate reads a month. It collects the data and gives it to police departments, which use the information to locate people.

Holly Beilin, a spokesperson for Flock Safety, told Stateline that while there are local police agencies that may be working with ICE, the company does not have a contractual relationship with the agency. Beilin also said that many liberal and even sanctuary cities continue to sign contracts with Flock Safety. She noted that the cameras have been used to solve some high-profile crimes, including identifying and leading police to the man who committed the Brown University shooting and killed an MIT professor at the end of last year.

“Agencies and cities are very much able to use this technology in a way that complies with their values. So they do not have to share data out of state,” Beilin said.

Pushback over data’s use

But critics, such as the American Civil Liberties Union, say that Flock Safety’s cameras are not only “giving even the smallest-town police chief access to an enormously powerful driver-surveillance tool,” but also that the data is being used by ICE. One news outlet, 404 Media, obtained records of these searches and found many were being carried out by local officers on behalf of ICE.

Last spring, the Denver City Council unanimously voted to terminate its contract with Flock Safety, but Democratic Mayor Mike Johnston unilaterally extended the contract in October, arguing that the technology was a useful crime-fighting tool.

The ACLU of Colorado has vehemently opposed the cameras, saying last August that audit logs from the Denver Police Department show more than 1,400 searches had been conducted for ICE since June 2024.

“The conversation has really gotten bigger because of the federal landscape and the focus, not only on immigrants and the functionality of ICE right now, but also on the side of really trying to reduce and/or eliminate protections in regards to access to reproductive care and gender-affirming care,” said Anaya Robinson, public policy director at the ACLU of Colorado.

“When we erode rights and access for a particular community, it’s just a matter of time before that erosion starts to touch other communities.”

Jimmy Monto, a Democratic city councilor in Syracuse, New York, led the charge to eliminate Flock Safety’s contract in his city.

“Syracuse has a very large immigrant population, a very large new American population, refugees that have resettled and been resettled here. So it’s a very sensitive issue,” Monto said, adding that license plate readers allow anyone reviewing the data to determine someone’s immigration status without a warrant.

“When we sign a contract with someone who is collecting data on the citizens who live in a city, we have to be hyper-focused on exactly what they are doing while we’re also giving police departments the tools that they need to also solve homicides, right?” Monto said.

“Certainly, if license plate readers are helpful in that way, I think the scope is right. But we have to make sure that that’s what we’re using it for, and that the companies that we are contracting with are acting in good faith.”

Emrich, the Montana lawmaker, said everyone should be concerned about protecting constitutional privacy rights, regardless of their political views.

“If the government is obtaining data in violation of constitutional rights, they could be violating a whole slew of individuals’ constitutional rights in pursuit of the individuals who may or may not be protected under those same constitutional rights,” he said.

Stateline reporter Shalina Chatlani can be reached at schatlani@stateline.org.

This story was originally produced by Stateline, which is part of States Newsroom, a nonprofit news network that includes Wisconsin Examiner and is supported by grants and a coalition of donors as a 501(c)(3) public charity.
