
Changes made to AI moratorium amid bill’s ‘vote-a-rama’

1 July 2025 at 09:00
Senate leaders are bending to bipartisan opposition and softening a proposed ban on state-level regulation of artificial intelligence. (Photo by Jennifer Shutt/States Newsroom)

Editor’s Note: This story has been updated to reflect the fact that Tennessee Sen. Marsha Blackburn backed off her own proposal late on Monday.

Senate Republicans are aiming to soften a proposed 10-year moratorium on state-level artificial intelligence laws that has drawn pushback from lawmakers on both sides of the aisle.

Sen. Marsha Blackburn of Tennessee and Sen. Ted Cruz of Texas developed a pared-down version of the moratorium Sunday that shortens the ban and makes exceptions for laws with specific aims, such as protecting children or limiting deepfake technologies.

The ban is part of the quickly evolving megabill that Republicans are aiming to pass by July 4. The Senate parliamentarian ruled Friday that a narrower version of the moratorium could remain, and the proposed changes recast the ban as a pause: states that want access to the $500 million in AI infrastructure and broadband funding included in the bill would be barred from regulating AI.

The compromise amendment brings the state-level AI ban to five years instead of 10, and carves out room for specific laws that address child online safety and protect against unauthorized generative images of a person’s likeness, often called deepfakes. The drafted amendment, obtained and published by Politico Sunday, still bans laws that aim to regulate AI models and decision-making systems.

Blackburn has been vocal in opposing the rigidity of the original 10-year moratorium, and recently reintroduced a bill called the Kids Online Safety Act alongside Connecticut Democrat Sen. Richard Blumenthal, Senate Majority Leader John Thune of South Dakota and Senate Minority Leader Chuck Schumer of New York. The bill would require tech companies to take steps to prevent potentially harmful material, like posts about eating disorders and instances of online bullying, from harming children.

Blackburn said in a statement Sunday that she was “pleased” that Cruz agreed to update the provisions to exclude laws that “protect kids, creators, and other vulnerable individuals from the unintended consequences of AI.” This proposed version of the amendment would allow her state’s ELVIS Act, which prohibits people from using AI to mimic a person’s voice in the music industry without their permission, to continue to be enforced.

Late Monday, however, Blackburn backed off her own amendment, saying the language was “unacceptable” because it did not go as far as the Kids Online Safety Act in allowing states to protect children from potential harms of AI. Her move left the fate of the compromise measure in doubt as the Senate continued to debate the large tax bill to which it was attached.

Though introduced by Senate Republicans, the AI moratorium was losing favor among GOP lawmakers and state officials.

Sens. Josh Hawley of Missouri, Jerry Moran of Kansas and Ron Johnson of Wisconsin were expected to vote against the moratorium, and Georgia Rep. Marjorie Taylor Greene said during a congressional hearing in June that she had changed her mind after initially voting for the amendment.

“I support AI in many different faculties,” she said during the June 5 House Oversight Committee hearing. “However, I think that at this time, as our generation is very much responsible, not only here in Congress, but leaders in tech industry and leaders in states and all around the world have an incredible responsibility of the future and development regulation and laws of AI.”

On Friday, a group of 17 Republican governors sent a letter to Thune and House Speaker Mike Johnson asking them to remove the ban from the megabill.

“While the legislation overall is very strong, there is one small portion of it that threatens to undo all the work states have done to protect our citizens from the misuse of artificial intelligence,” the governors wrote. “We are writing to encourage congressional leadership to strip this provision from the bill before it goes to President Trump’s desk for his signature.”

Alexandra Reeve Givens, president and CEO of tech policy organization Center for Democracy and Technology, said in a statement Monday that all versions of the AI moratorium would hurt states’ abilities to protect people from “potentially devastating AI harms.”

“Despite the multiple revisions of this policy, it’s clear that its drafters are not considering the moratorium’s full implications,” Reeve Givens said. “Congress should abandon this attempt to stifle the efforts of state and local officials who are grappling with the implications of this rapidly developing technology, and should stop abdicating its own responsibility to protect the American people from the real harms that these systems have been shown to cause.”

The updated language proposed by Blackburn and Cruz isn’t expected to be offered as a standalone amendment to the reconciliation bill, Politico reported, but rather as part of a broader package of changes as the Senate continues its “vote-a-rama” on the bill this week.

U.S. House Republicans aim to ban state-level AI laws for 10 years

19 May 2025 at 10:00
Republican Sen. Ted Cruz of Texas shakes hands with OpenAI CEO Sam Altman following a hearing of the Senate Committee on Commerce, Science and Transportation on Thursday, May 8, in Washington, D.C. (Photo by Chip Somodevilla/Getty Images)

A footnote in a budget bill U.S. House Republicans are trying to pass before Memorial Day is the first major signal of how Congress may address artificial intelligence legislation, as Republicans seek to create a moratorium on any AI laws at the state level for 10 years.

The measure, advanced Wednesday, May 14, as part of the House Energy & Commerce Committee’s budget reconciliation proposal, says a state may not enforce any law or regulation on AI models and systems, or automated decision-making systems, for the next 10 years. Exceptions would include laws that “remove legal impediments to, or facilitate the deployment or operation of” AI systems.

“No one believes that AI should be unregulated,” said California Rep. Jay Obernolte, a Republican member of the Subcommittee on Communications and Technology, during a markup Wednesday. But he said he believes that responsibility should fall to Congress, not the states. 

The AI law moratorium was packaged with a budget line item proposing to spend $500 million modernizing federal IT programs with commercial AI systems through 2035.

This move by House Republicans is not really out of left field, said Travis Hall, director for State Engagement at tech policy and governance organization Center for Democracy and Technology. Many lawmakers have been itching to create a preemptive federal law to supersede AI legislation in the states.

At a Senate Commerce Committee session earlier this month, Chairman Ted Cruz, a Texas Republican, said he planned to create “a regulatory sandbox for AI” that would prevent state overregulation and promote the United States’ AI industry. OpenAI CEO Sam Altman, once open to AI regulations, testified that the country’s lack of regulation is what contributed to his success.

“I think it is no accident that that’s happening in America again and again and again, but we need to make sure that we build our systems and that we set our policy in a way where that continues to happen,” Altman said.  

As the language of the bill stands, Congress would prohibit enforcement of any existing laws on AI and decision-making systems, and nullify any potential laws that could be put forth over the next decade, Hall said. Though it discussed AI research last year, Congress has not put forward any guidelines or regulations on AI.

“I will say what feels very different and new about this particular provision … both in terms of conversations about artificial intelligence and in terms of other areas of tech and telecom policy, is the complete lack of any regulatory structure that would actually be preempting the state law,” Hall said.

States have been developing their own laws around AI and decision-making systems — software that helps analyze and sort data, commonly used for job applications, mortgage lending, banking and in other industries — over the last few years as they await federal legislation. At least 550 AI bills have been introduced across 45 states and Puerto Rico in 2025, the National Conference of State Legislatures reported.

Many of these state laws regulate how AI intertwines with data privacy, transparency and discrimination. Others regulate how children can access these tools, how the tools can be used in election processes, and how to address deepfakes, or computer-generated likenesses of real people.

While lawmakers from both sides of the aisle have called for federal AI legislation, Hall said he thinks industry pressure and President Donald Trump’s deregulated tech stance won’t allow Congress to effectively act on a preemptive law — “states are stepping into that vacuum themselves.”

On Friday, 40 state attorneys general signed a bipartisan letter to Congress opposing the limitation on state AI legislation. The letter urged Congress to develop a federal framework for AI governance for “high risk” systems that promotes transparency, testing and tool assessment, in addition to state legislation. The letter said existing laws were developed “over years through careful consideration and extensive stakeholder input from consumers, industry, and advocates.”

“In the face of Congressional inaction on the emergence of real-world harms raised by the use of AI, states are likely to be the forum for addressing such issues,” the letter said. “This bill would directly harm consumers, deprive them of rights currently held in many states, and prevent State AGs from fulfilling their mandate to protect consumers.”  

Gov. Gavin Newsom vetoed a sweeping AI bill in California last year, citing similar industry pressure. Senate Bill 1047 would have required safety testing of costly AI models to determine whether they would likely lead to mass death, endanger public infrastructure or enable severe cyberattacks.

Assemblymember Rebecca Bauer-Kahan, a Bay Area Democrat, has found more success with the Automated Decisions Safety Act this year, but said that, as a regulatory lawyer, she would favor a federal approach.

“We don’t have a Congress that is going to do what our communities want, and so in the absence of their action, the states are stepping up,” she said.

The moratorium would kill the Automated Decisions Safety Act and nullify all of California’s AI legislation, as well as landmark laws like Colorado’s, which will go into effect in February. State Rep. Brianna Titone, a sponsor of Colorado’s law, said people are hungry for some regulation.

“A 10 year moratorium of time is astronomical in terms of how quickly this technology is being developed,” she said in an email to States Newsroom. “To have a complete free-for-all on AI with no safeguards puts citizens at risk of situations we haven’t yet conceived of.”

Hall is skeptical that this provision will advance fully, saying he feels legislators will have a hard time trying to justify this moratorium in a budget bill relating to updating aging IT systems. But it’s a clear indication that the focus of this Congress is on deregulation, not accountability, he said.

“I do think that it’s unfortunate that the first statement coming out is one of abdication of responsibility,” Hall said, “as opposed to stepping up and doing the hard work of actually putting in place common sense and, like, actual protections for people that allows for innovation.”

Farm Foundation Forum Underscores Need for Comprehensive Agricultural Labor Reform

2 December 2024 at 16:13

The November Farm Foundation® Forum, Growing Together: Trends and Transformation in U.S. Agriculture Labor, highlighted some of the findings from a recent multi-day symposium that explored the future of the U.S. agricultural workforce. The symposium, held by Farm Foundation and the Economic Research Service at the U.S. Department of Agriculture, aimed to convene a network of researchers and stakeholders to engage in productive discussions focused on farm labor issues. The primary goal was to strengthen and enhance ongoing farm labor research.

This forum highlighted the critical importance of farm labor to the competitiveness of U.S. agriculture, particularly for labor-intensive commodities like fruits and vegetables. The discussion was moderated by Michael Marsh, president and CEO of the National Council of Agricultural Employers, and featured panelists Philip Martin, professor emeritus at the University of California, Davis; Andrew Padovani, senior research associate with JBS International; and Alexandra Hill, assistant professor at the University of California, Berkeley.

The Forum covered a wide range of topics, including wage rates and competition, legislative and regulatory challenges, litigation and legal actions, mechanization and labor alternatives, and economic and demographic trends.

Numerous Issues to Consider

One point brought up was that there has been no significant agricultural labor reform since 1986, making it difficult to address current labor issues. Farmers must also contend with many new regulations, including those related to wage rates and worker protection. The impact of the Adverse Effect Wage Rate and competition with countries like Mexico was also discussed.

One solution to rising labor costs is a push toward mechanization, which raises its own questions about how growers will adapt to the change. In some cases, robotic harvesters are not yet fast enough or inexpensive enough to replace human hand pickers, but the gap may be closing fastest for crops like apples.

The H-2A program was also a large part of the discussion. The use of H-2A workers is increasing, but the program’s costs and regulatory requirements are significant. The anticipated impacts of the incoming administration on the potential for ag labor reform were also briefly discussed during the audience question-and-answer session.

Overall, the Forum underscored the urgent need for comprehensive agricultural labor reform to ensure the sustainability and competitiveness of U.S. agriculture. The discussions highlighted the complex interplay of wage rates, regulatory challenges, and the need for mechanization and alternative labor sources.

The two-hour discussion, including the audience question and answer session, was recorded and is archived on the Farm Foundation website. 
