Changes made to AI moratorium amid bill’s ‘vote-a-rama’

Senate leaders are bending to bipartisan opposition and softening a proposed ban on state-level regulation of artificial intelligence. (Photo by Jennifer Shutt/States Newsroom)

Editor’s Note: This story has been updated to reflect the fact that Tennessee Sen. Marsha Blackburn backed off her own proposal late on Monday.

Senate Republicans are aiming to soften a proposed 10-year moratorium on state-level artificial intelligence laws that has drawn pushback from members of Congress on both sides of the aisle.

Sen. Marsha Blackburn of Tennessee and Sen. Ted Cruz of Texas developed a pared-down version of the moratorium Sunday that shortens the ban and makes exceptions for laws with specific aims, such as protecting children or limiting deepfake technologies.

The ban is part of the quickly evolving megabill that Republicans are aiming to pass by July 4. The Senate parliamentarian ruled Friday that a narrower version of the moratorium could remain, but the proposed changes recast the ban as a condition: states that want access to the $500 million in AI infrastructure and broadband funding included in the bill must refrain from regulating AI.

The compromise amendment shortens the state-level AI ban to five years instead of 10, and carves out room for specific laws addressing child online safety and protecting against unauthorized generative images of a person's likeness, often called deepfakes. The drafted amendment, obtained and published by Politico on Sunday, still bans laws that aim to regulate AI models and decision-making systems.

Blackburn has been vocal in opposing the rigidity of the original 10-year moratorium, and recently reintroduced a bill called the Kids Online Safety Act alongside Democratic Sen. Richard Blumenthal of Connecticut, Senate Majority Leader John Thune of South Dakota and Senate Minority Leader Chuck Schumer of New York. The bill would require tech companies to take steps to prevent potentially harmful material, like posts about eating disorders and instances of online bullying, from affecting children.

Blackburn said in a statement Sunday that she was “pleased” that Cruz agreed to update the provisions to exclude laws that “protect kids, creators, and other vulnerable individuals from the unintended consequences of AI.” This proposed version of the amendment would allow her state’s ELVIS Act, which prohibits people from using AI to mimic a person’s voice in the music industry without their permission, to continue to be enforced.

Late Monday, however, Blackburn backed off her own amendment, saying the language was “unacceptable” because it did not go as far as the Kids Online Safety Act in allowing states to protect children from potential harms of AI. Her move left the fate of the compromise measure in doubt as the Senate continued to debate the large tax bill to which it was attached.

Though introduced by Senate Republicans, the AI moratorium has been losing favor with GOP lawmakers and state officials.

Sens. Josh Hawley of Missouri, Jerry Moran of Kansas and Ron Johnson of Wisconsin were expected to vote against the moratorium, and Georgia Rep. Marjorie Taylor Greene said during a congressional hearing in June that she had changed her mind after initially voting for the measure.

“I support AI in many different faculties,” she said during the June 5 House Oversight Committee hearing. “However, I think that at this time, as our generation is very much responsible, not only here in Congress, but leaders in tech industry and leaders in states and all around the world have an incredible responsibility of the future and development regulation and laws of AI.”

On Friday, a group of 17 Republican governors wrote in a letter to Thune and Speaker Mike Johnson, asking them to remove the ban from the megabill.

“While the legislation overall is very strong, there is one small portion of it that threatens to undo all the work states have done to protect our citizens from the misuse of artificial intelligence,” the governors wrote. “We are writing to encourage congressional leadership to strip this provision from the bill before it goes to President Trump’s desk for his signature.”

Alexandra Reeve Givens, president and CEO of the tech policy organization Center for Democracy and Technology, said in a statement Monday that all versions of the AI moratorium would hurt states' ability to protect people from "potentially devastating AI harms."

“Despite the multiple revisions of this policy, it’s clear that its drafters are not considering the moratorium’s full implications,” Reeve Givens said. “Congress should abandon this attempt to stifle the efforts of state and local officials who are grappling with the implications of this rapidly developing technology, and should stop abdicating its own responsibility to protect the American people from the real harms that these systems have been shown to cause.”

The updated language proposed by Blackburn and Cruz isn't expected to be a standalone amendment to the reconciliation bill, Politico reported, but rather part of a broader package of changes as the Senate continues its "vote-a-rama" on the bill this week.

European Union AI regulation is both model and warning for U.S. lawmakers, experts say

Members of the group Initiative Urheberrecht (authors' rights initiative) demonstrate to demand regulation of artificial intelligence on June 16, 2023 in Berlin, Germany. The AI regulation later adopted by the European Union is a model for many U.S. lawmakers interested in consumer protection but a cautionary tale for others who say they're interested in robust innovation, experts say. (Photo by Sean Gallup/Getty Images)

The European Union's landmark AI Act, which went into effect last year, stands as inspiration for some U.S. legislators looking to enact widespread consumer protections. Others cite it as a cautionary tale, warning that overregulation leads to a less competitive digital economy.

The European Union enacted its law to prevent what is currently happening in the U.S., a patchwork of AI legislation across the states, said Sean Heather, senior vice president for international regulatory affairs and antitrust at the Chamber of Commerce, during an exploratory congressional subcommittee hearing on May 21.

"America's AI innovators risk getting squeezed between the so-called Brussels Effect of overzealous European regulation and the so-called Sacramento Effect of excessive state and local mandates," said Adam Thierer, a senior fellow at the think tank R Street Institute, at the hearing.

The EU's AI Act is comprehensive, and puts regulatory responsibility on developers of AI to mitigate the risk of harm by their systems. It also requires developers to provide technical documentation and training summaries of their models for review by EU officials. Adopting similar policies would knock the U.S. out of its first-place position in the global AI race, Thierer testified.

The "Brussels Effect" Thierer mentioned is the idea that the EU's regulations will influence the global market. But not much of the world has followed suit: Canada, Brazil and Peru are working on similar laws so far, while the U.K. and countries like Australia, New Zealand, Switzerland, Singapore and Japan have taken a less restrictive approach.

When Jeff Le, founder of the tech policy consultancy 100 Mile Strategies LLC, talks to lawmakers on either side of the aisle, he said, he hears that they don't want another country's laws deciding American rules.

“Maybe there’s a place for it in our regulatory debate,” Le said. “But I think the point here is American constituents should be overseen by American rules, and absent those rules, it’s very complicated.”

Does the EU AI Act keep Europe from competing?

Critics of the AI Act say its language is overly broad, slowing the development of AI systems as developers work to meet regulatory requirements. France and Germany rank in the top 10 global AI leaders, and China is second, according to Stanford's AI Index, but the U.S. currently leads by a wide margin in the number of leading AI models and in AI research, experts testified before the congressional committee.

University of Houston Law Center professor Peter Salib said he believes the EU's AI Act is a factor, but not the only one, in keeping European countries out of the top spots. First, the law has been in effect for only about nine months, not long enough to have had much impact on Europe's ability to participate in the global AI economy, he said.

Second, the EU AI Act is one piece of Europe's overall attitude toward digital protection, Salib said. The General Data Protection Regulation, a law that went into effect in 2018 and gives individuals control over their personal information, follows a similarly strict regulatory mindset.

"It's part of a much longer-term trend in Europe that prioritizes things like privacy and transparency really, really highly," Salib said. "Which is, for Europeans, good — if that's what they want, but it does seem to have serious costs in terms of where innovation happens."

Stavros Gadinis, a professor at the Berkeley Center for Law and Business who has worked in the U.S. and Europe, said he thinks most of the concerns around innovation in the EU lie outside the AI Act. Europe's tech labor market isn't as robust as that of the U.S., and it can't compete with the financing available to Silicon Valley and Chinese companies, he said.

“That is what’s keeping them, more than this regulation,” Gadinis said. “That and, the law hasn’t really had the chance to have teeth yet.”

During the May 21 hearing, Rep. Lori Trahan, a Democrat from Massachusetts, called the Republicans' stance — that any AI regulation would kill tech startups and growing companies — "a false choice."

The U.S. heavily invests in science and innovation, has founder-friendly immigration policies, lenient bankruptcy laws and a "cultural tolerance for risk taking," all policies the EU does not offer, Trahan said.

“It is therefore false and disingenuous to blame EU’s tech regulation for its low number of major tech firms,” Trahan said. “The story is much more complicated, but just as the EU may have something to learn from United States innovation policy, we’d be wise to study their approach to protecting consumers online.”

Self-governance

The EU's law puts a lot of responsibility on developers of AI, requiring transparency, reporting, third-party testing and copyright tracking. These are things AI companies in the U.S. say they already do, Gadinis said.

“They all say that they do this to a certain extent,” he said. “But the question is, how expansive these efforts need to be, especially if you need to convince a regulator about it.”

AI companies in the U.S. currently self-govern, meaning they test their models for some of the societal and cybersecurity risks outlined by many lawmakers. But there's no universal standard; what one company deems safe may be seen as risky by another, Gadinis said. Universal regulations would create a baseline for introducing new models and features, he said.

Even one company's safety testing may look different from one year to the next. Until 2024, OpenAI CEO Sam Altman favored federal AI regulation and sat on the company's Safety and Security Committee, which regularly evaluates OpenAI's processes and safeguards over 90-day review periods.

In September, he left the committee and has since become vocal against federal AI legislation. OpenAI's safety committee has since operated as an independent entity, Time reported. The committee recently published recommendations to enhance security measures, increase transparency about OpenAI's work and "unify the company's safety frameworks."

Even though Altman has changed his tune on federal regulation, OpenAI's mission remains focused on the benefits society gains from AI. "They wanted to create [artificial general intelligence] that would benefit humanity instead of destroying it," Salib said.

AI company Anthropic, maker of the chatbot Claude, was formed by former OpenAI staff members in 2021 and focuses on responsible AI development. Google, Microsoft and Meta are other top American AI companies that conduct some form of internal safety testing, and all were recently assessed by the AI Safety Project.

The project asked experts to weigh in on the strategies each company took for risk assessment, current harms, safety frameworks, existential safety strategy, governance and accountability, and transparency and communication. Anthropic scored the highest, but all companies fell short on "existential safety," or guarding against the harm AI models could cause to society if left unchecked.

Just by developing these internal policies, most AI leaders are acknowledging the need for some form of safeguards, Salib said.

“I don’t want to say there’s wide industry agreement, because some seem to have changed their tunes last summer,” Salib said. “But there’s at least a lot of evidence that this is serious and worthwhile thinking about.”

What could the U.S. gain from EU’s practices?

Salib said he believes a law like the EU AI Act would be "overly comprehensive" for the U.S.

Many AI concerns now, like discrimination by algorithms or self-driving cars, could be addressed by existing laws, he said: "It's not clear to me that we need special AI laws for these things."

But he said the specific, case-by-case legislation that states have been passing has been effective in targeting harmful AI actions and in ensuring compliance from AI companies.

Gadinis said he's not sure why Congress is opposed to the state-by-state legislative model, as most of the state laws are consumer-oriented and very specific, such as deciding how a state may use AI in education, preventing discrimination in healthcare data or keeping children away from sexually explicit AI content.

“I wouldn’t consider these particularly controversial, right?” Gadinis said. “I don’t think the big AI companies would actually want to be associated with problems in that area.”

Gadinis said the EU's AI Act originally mirrored this specific, case-by-case approach, addressing AI considerations around sexual images, minors, consumer fraud and use of consumer data. But after ChatGPT was released in 2022, EU lawmakers went back to the drawing board and added components on large language models, systemic risk, high-risk uses and training, which made the reach of who needed to comply much wider.

After 10 months of living with the law, the European Commission said this month it is open to "simplify the implementation" to make it easier for companies to comply.

It's unlikely the U.S. will end up with AI regulations as comprehensive as the EU's, Gadinis and Salib said. President Trump's administration has taken a deregulatory approach to tech so far, and House Republicans passed a 10-year moratorium on state-level AI laws in the "big, beautiful bill" now headed to the Senate for consideration.

Gadinis predicts the federal government won't take much action at all to regulate AI, but mounting public pressure may result in an industry self-regulatory body. This is where he believes the EU will be most influential: it has leaned on public-private partnerships to develop a strategy.

“Most of the action is going to come either from the private sector itself — they will band together — or from what the EU is doing in getting experts together, trying to kind of come up with a sort of half industry, half government approach,” Gadinis said.

Congress begins considering first federal AI regulations

A House committee met this week to discuss possible federal AI legislation, and debated a pending measure to preempt states from enacting their own regulations. (Photo by Jennifer Shutt/States Newsroom)

In one of the first major steps toward discussing widespread federal regulation of artificial intelligence, members of the House subcommittee on Commerce, Manufacturing and Trade met Wednesday to discuss the United States' place in the global AI race.

The hearing took place amid a push from House Republicans to put a stop to state-level AI legislation for the next decade. The measure was advanced last week as part of the House Energy & Commerce Committee's budget reconciliation proposal, part of House Republicans' "big, beautiful bill" aiming to cut hundreds of billions of dollars in government spending, including safety net programs, over the next decade.

“We’re here today to determine how Congress can support the growth of an industry that is key for American competitiveness and jobs without losing the race to write the global AI rule book,” said Florida Rep. Gus Bilirakis, a Republican and chairman of the Innovation, Data, and Commerce subcommittee.

In a two-and-a-half-hour hearing, subcommittee members discussed how to maintain America's leadership in AI, the European Union's landmark AI Act that went into effect last year, the growing patchwork of state AI laws and the proposed moratorium on those laws.

Federal guidelines or regulation around AI technologies drew bipartisan support in the last Congress, and the Bipartisan House Task Force on Artificial Intelligence released its research and findings in December. But many Republicans who supported these efforts in the past are changing course, arguing that a moratorium on state laws would give Congress time to pass a unified, federal set of guidelines.

Rep. Jay Obernolte, a Republican from California, said the more than 1,000 state bills relating to AI introduced this year have created urgency to pull together federal guidelines. The states currently have "creative agency" over AI regulations, he said.

“The states got out ahead of this. They feel a creative ownership over their frameworks, and they’re the ones that are preventing us from doing this now,” Obernolte said. “Which is an object lesson to us here of why we need a moratorium to prevent that from occurring.”

Critics of the moratorium questioned why legislation at the state level would prevent the creation of federal guidelines.

Rep. Kim Schrier, a Democrat from Washington, said that stripping states' ability to legislate AI without a federal framework in place first would be "Republicans' big gift to big tech." The moratorium on state AI laws would halt any in-progress legislation and nullify existing legislation.

“This pattern of gifts and giveaways to big tech by the Trump administration, with the cooperation of Republicans in Congress, is hurting American consumers,” she said. “Instead, we should be learning from the work our state and local counterparts are doing now to deliver well-considered, robust legislation, giving American businesses the framework and resources they need to succeed while protecting consumers.”

House members opposing AI legislation often cited the lack of regulation as one of the reasons the United States currently leads the global AI marketplace. The U.S. ranks first, testified Marc Bhargava, director at the global venture capital firm General Catalyst, though China follows closely behind in computing power and its AI models.

Sean Heather, senior vice president for international regulatory affairs and antitrust at the Chamber of Commerce, testified that legislation that too closely mirrors the European Union’s AI Act, which went into effect last summer, could bump the U.S. out of its top position. The EU’s AI Act is comprehensive, and puts regulatory responsibility on developers of AI to mitigate risk of harm by the systems. It also requires developers to provide technical documentation and training summaries.

The EU's AI Act is one factor in why Europe is not a stronger player in AI, Bhargava said, but it's not the only one. The U.S. has a history of investing in science and innovation and of being friendly to tech startup founders, including immigrant founders, he said. Forty-six percent of Fortune 500 companies in 2024 were founded by immigrants, as were 65% of top AI companies. Europe has not pursued these business-friendly policies, Bhargava said.

“The reason we’re ahead today is our startups. We have to think about how to continue to give them that edge, and giving them that edge means giving them guidelines, and not necessarily a framework, or patchwork of state regulations or over regulating,” Bhargava said. “We need to come up with that right balance.”  

AI companies in the U.S. currently self-govern, meaning they test their models for some of the societal and cybersecurity risks that many lawmakers would like to see written into law. Most investors also follow their own due-diligence strategy, Bhargava said. At General Catalyst, the firm assesses data sets and training models as well as model outputs, and asks AI companies to identify the potential downstream implications of their models.

Bhargava and a handful of members on the committee said they fear that overly strong regulations, especially ones that put regulatory burden on developers like in the EU, could squash the next great tech startups before they can get their footing.

But a lack of legislation altogether puts Americans in a dangerous place, said Rep. Kathy Castor, a Democrat from Florida. She cited concerns about minors' interactions with unregulated AI, like the case of one 14-year-old from her state who took his own life after forming a close relationship with a chatbot, and another 14-year-old who was drawn into sexual conversations with a Meta chatbot.

“What the heck is Congress doing?” Castor said. “What are you doing to take the cops off the beat while states have acted to protect us?”

Amba Kak, co-executive director of the AI Now Institute, which studies the social implications of AI, said she is skeptical of allowing the industry to self-govern or letting AI grow unfettered. During the hearing, she said, members asserted that existing agencies or general rules would protect Americans from the harms of AI.

“But if that was true, then we wouldn’t see the reckless proliferation of AI applications that are predicated on exploiting children in this way,” she said.

Though Congress is in the early stages of considering a federal framework, Bhargava said states passed their existing AI laws with “the best intentions” in mind.

“People want to protect consumers. They want to create frameworks,” he said. “And partially, it’s because the federal government has not stepped up to have a framework that we’re leaving it to the states to regulate.”

Bhargava “strongly” encouraged the members of the committee to work together on a bipartisan framework, and incorporate the findings of last year’s Bipartisan House Task Force.

"I really think that if we can turn this into policy and enact it on the federal level, rather than leaving it to the states," Bhargava said, "it would be in the best interests of the startups that we represented."

U.S. House Republicans aim to ban state-level AI laws for 10 years

Republican Sen. Ted Cruz of Texas shakes hands with OpenAI CEO Sam Altman following a hearing of the Senate Committee on Commerce, Science and Transportation on Thursday, May 8, in Washington, D.C. (Photo by Chip Somodevilla/Getty Images)

A footnote in a budget bill U.S. House Republicans are trying to pass before Memorial Day is the first major signal of how Congress may address artificial intelligence legislation, as lawmakers seek to create a moratorium on any AI laws at the state level for 10 years.

The measure, advanced Wednesday, May 14, as part of the House Energy & Commerce Committee's budget reconciliation proposal, says a state may not enforce any law or regulation on AI models and systems, or on automated decision-making systems, for the next 10 years. Exceptions would include laws that "remove legal impediments to, or facilitate the deployment or operation of" AI systems.

“No one believes that AI should be unregulated,” said California Rep. Jay Obernolte, a Republican member of the Subcommittee on Communications and Technology, during a markup Wednesday. But he said he believes that responsibility should fall to Congress, not the states. 

The AI law moratorium was packaged with a budget line item proposing to spend $500 million modernizing federal IT programs with commercial AI systems through 2035.

This move by House Republicans is not really out of left field, said Travis Hall, director for state engagement at the tech policy and governance organization Center for Democracy and Technology. Many in Congress have been itching to create a preemptive federal law to supersede AI legislation in the states.

At a Senate Commerce Committee session earlier this month, Chairman Ted Cruz, a Texas Republican, said he plans to create "a regulatory sandbox for AI" that would prevent state overregulation and promote the United States' AI industry. OpenAI CEO Sam Altman, once open to AI regulations, testified that the country's lack of regulation is what contributed to his success.

“I think it is no accident that that’s happening in America again and again and again, but we need to make sure that we build our systems and that we set our policy in a way where that continues to happen,” Altman said.  

As the bill is written, Congress would prohibit enforcement of any existing state laws on AI and decision-making systems, and nullify any potential laws that could be put forth over the next decade, Hall said. Though it discussed AI research last year, Congress has not put forward any guidelines or regulations on AI.

“I will say what feels very different and new about this particular provision … both in terms of conversations about artificial intelligence and in terms of other areas of tech and telecom policy, is the complete lack of any regulatory structure that would actually be preempting the state law,” Hall said.

States have been developing their own laws around AI and decision-making systems — software that helps analyze and sort data, commonly used in job applications, mortgage lending, banking and other industries — over the last few years as they await federal legislation. At least 550 AI bills have been introduced across 45 states and Puerto Rico in 2025, the National Conference of State Legislatures reported.

Many of these state laws regulate how AI intertwines with data privacy, transparency and discrimination. Others regulate how children can access these tools or how they can be used in elections, or address deepfakes, computer-generated likenesses of real people.

While lawmakers from both sides of the aisle have called for federal AI legislation, Hall said he thinks industry pressure and President Donald Trump’s deregulated tech stance won’t allow Congress to effectively act on a preemptive law — “states are stepping into that vacuum themselves.”

On Friday, 40 state attorneys general signed a bipartisan letter to Congress opposing the limitation on state AI legislation. The letter urged Congress to develop a federal framework for AI governance for “high risk” systems that promotes transparency, testing and tool assessment, in addition to state legislation. The letter said existing laws were developed “over years through careful consideration and extensive stakeholder input from consumers, industry, and advocates.”

“In the face of Congressional inaction on the emergence of real-world harms raised by the use of AI, states are likely to be the forum for addressing such issues,” the letter said. “This bill would directly harm consumers, deprive them of rights currently held in many states, and prevent State AGs from fulfilling their mandate to protect consumers.”  

California Gov. Gavin Newsom vetoed a wide-sweeping AI bill last year, citing similar industry pressure. Senate Bill 1047 would have required safety testing of costly AI models to determine whether they would likely lead to mass death, endanger public infrastructure or enable severe cyberattacks.

Assemblymember Rebecca Bauer-Kahan, a Bay Area Democrat, has found more success with the Automated Decisions Safety Act this year, but said that, as a regulatory lawyer, she would favor a federal approach.

“We don’t have a Congress that is going to do what our communities want, and so in the absence of their action, the states are stepping up,” she said.

The moratorium would kill the Automated Decisions Safety Act and nullify all of California's AI legislation, as well as landmark laws like Colorado's, which is set to go into effect in February. State Rep. Brianna Titone, a sponsor of Colorado's law, said people are hungry for some regulation.

“A 10 year moratorium of time is astronomical in terms of how quickly this technology is being developed,” she said in an email to States Newsroom. “To have a complete free-for-all on AI with no safeguards puts citizens at risk of situations we haven’t yet conceived of.”

Hall is skeptical that this provision will advance fully, saying legislators will have a hard time justifying the moratorium in a budget bill devoted to updating aging IT systems. But it's a clear indication that the focus of this Congress is on deregulation, not accountability, he said.

“I do think that it’s unfortunate that the first statement coming out is one of abdication of responsibility,” Hall said, “as opposed to stepping up and doing the hard work of actually putting in place common sense and, like, actual protections for people that allows for innovation.”
