
Trump’s AI Action Plan removes ‘red tape’ for AI developers and data centers, punishes states that act alone

24 July 2025 at 10:30
David Sacks, U.S. President Donald Trump's "AI and Crypto Czar", speaks to President Trump as he signs a series of executive orders in the Oval Office of the White House on Jan. 23, 2025 in Washington, D.C. Trump signed a range of executive orders pertaining to issues including cryptocurrency and artificial intelligence. (Photo by Anna Moneymaker/Getty Images)

The Trump administration wants to greatly expand the development and use of advanced artificial intelligence, including rolling back environmental rules to spur building of power-thirsty data centers and punishing states that attempt to regulate AI on their own.

The administration’s action plan, called “Winning the AI Race: America’s AI Action Plan,” released on Wednesday, is a result of six months of research by tech advisors, after Trump removed President Joe Biden’s signature AI guardrails on his first day in office. The plan takes a hands-off approach to AI safeguards, and invests in getting more American workers to use AI in their daily lives.

“To win the AI race, the U.S. must lead in innovation, infrastructure, and global partnerships,” AI and Crypto Czar David Sacks said in a statement. “At the same time, we must center American workers and avoid Orwellian uses of AI. This Action Plan provides a roadmap for doing that.”

The action plan outlines three major pillars — accelerate AI innovation, build American AI infrastructure and lead in international AI diplomacy and security.

The Trump administration says that to accelerate AI in the U.S., it needs to “remove red tape” around “onerous” AI regulations. The plan recommends the Office of Science and Technology Policy inquire with businesses and the public about federal regulations that hinder AI innovation, and suggests the federal government end funding to states “with burdensome AI regulations.”

The plan does say that these actions should not interfere with states’ ability to pass AI laws that are not “unduly restrictive,” despite unsuccessful attempts by Congressional Republicans to impose an AI moratorium for the states.

The plan also says that free speech should be prioritized in AI, saying models must be trained so that “truth, rather than social engineering agendas” is the focus of model outputs. The plan recommends that the Department of Commerce and the National Institute of Standards and Technology (NIST) revise the NIST AI Risk Management Framework to eliminate references to misinformation, DEI and climate change.

The Trump administration also pushes for AI to be more widely adopted in government roles, manufacturing, science and in the Department of Defense, and proposes increased funding and regulatory sandboxes — separate trial spaces for AI to be developed — to do so.

To support the proposed increases in AI use, the plan outlines a streamlined permitting process for data centers, which includes lowering or dropping environmental regulations under the Clean Air Act, the Clean Water Act and others. It also proposes making federal lands available for data center construction and pushes for American products to be used in building the infrastructure.

The Action Plan warns of cybersecurity risks and potential exposure to adversarial threats, saying that the government must develop secure frontier AI systems with national security agencies and develop “AI compute control enforcement,” to ensure security in AI systems and with semiconductor chips. It encourages collaboration with “like-minded nations” working toward AI models with shared values, but says it will counter Chinese influence.

“These clear-cut policy goals set expectations for the Federal Government to ensure America sets the technological gold standard worldwide, and that the world continues to run on American technology,” Secretary of State and Acting National Security Advisor Marco Rubio said in a statement.

The policy goals outlined in the plan fall in line with the deregulatory attitude Trump took during his campaign, as he more closely aligned himself with Silicon Valley tech giants, many of whom became Trump donors. The plan paves the way for continued unfettered growth of American AI models, and outlines the huge energy and computing power needed to keep up with those goals.

In an address at the “Winning the AI Race” Summit Wednesday evening, President Donald Trump called for a “single federal standard” regulating AI, not a state-by-state approach.

“You can’t have three or four states holding you up. You can’t have a state with standards that are so high that it’s going to hold you up,” Trump said. “You have to have a federal rule and regulation.”

The summit was hosted by the Hill & Valley Forum, a group of lawmakers and venture capitalists, and the All-In Podcast, which is co-hosted by AI Czar Sacks.

In addition to discussing the AI action plan, Trump signed executive orders to fast-track data center permitting, expand AI exports including chips, software and data storage, and prohibit the federal government from procuring AI that has “partisan bias or ideological agendas.”

He spoke about the need for the U.S. to stay ahead in the global AI race, saying that the technology brings the “potential for bad as well as for good,” but that wasn’t reason enough to “retreat” from technological advancement. The U.S. is entering a “golden age,” he said in his speech.

“It will be powered by American energy. It will be run on American technology improved by American artificial intelligence, and it will make America richer, stronger, greater, freer, and more powerful than ever before,” Trump said.

During the address, Trump addressed his evolving relationship with tech CEOs, calling out Amazon, Google and Microsoft for investing $320 billion in data centers and AI infrastructure this year.

“I didn’t like them so much my first term, when I was running, I wouldn’t say I was thrilled with them, but I’ve gotten to know them and like them,” Trump said. “And I think they got to like me, but I think they got to like my policies, maybe much more than me.”

Sam Altman, CEO of OpenAI — one of the tech giants that stands to flourish under the proposed policies — spoke Tuesday about the productivity and innovation potential that AI has unlocked. The growth of AI in the last five years has surprised even him, Altman said. But it also poses very real risks, he said, mentioning emotional attachment to and overreliance on AI, as well as foreign threats.

“Without a drop of malevolence from anyone, society can just veer in a sort of strange direction,” Altman said.

OpenAI CEO Sam Altman says AI has life-altering potential, both for good and ill

24 July 2025 at 10:15
OpenAI CEO Sam Altman shared his view of the promise and peril of advanced artificial intelligence at a Federal Reserve conference in Washington, D.C. on July 22, 2025. (Photo by Andrew Harnik/Getty Images)

For as much promise as artificial intelligence shows in making life better, OpenAI CEO Sam Altman is worried.

The tech leader who has done so much to develop AI and make it accessible to the public says the technology could have life-altering effects on nearly everything, particularly in the wrong hands.

There’s a possible world in which foreign adversaries could use AI to design a bioweapon, take down the power grid, or break into financial institutions and steal wealth from Americans, he said. It’s hard to imagine without superhuman intelligence, but it becomes “very possible” with it, he said.

“Because we don’t have that, we can’t defend against it,” Altman said at a Federal Reserve conference this week in Washington, D.C. “We continue to like, flash the warning lights on this. I think the world is not taking us seriously. I don’t know what else we can do there, but it’s like, this is a very big thing coming.”

Altman joined the conference Tuesday to speak about AI’s role in the financial sector, but also spoke about how it is changing the workforce and innovation. The growth of AI in the last five years has surprised even him, Altman said.

He acknowledged real fear that the technology has potential to grow beyond the capabilities that humans prompt it for, but said the time and productivity savings have been undeniable. 

OpenAI’s most well-known product, ChatGPT, was released to the public in November 2022, and its current model, GPT-4o, shows how far the technology has evolved since. Last week, the company had a model that achieved “gold-level performance,” akin to operating as well as humans who are true experts in their field, Altman said.

Many have likened the introduction of AI to the invention of the internet, changing so much of our day-to-day lives and workplaces. But Altman instead compared it to the transistor, a foundational piece of hardware invented in the 1940s that allowed electricity to flow through devices.

“It changed what we were able to build. It became part of, kind of, everything pretty quickly,” Altman said. “And in the same way, I don’t think you’ll be talking about AI companies for very long, you will just expect products and services to use this technology.”

When prompted by the Federal Reserve’s Vice Chair for Supervision Michelle Bowman to predict how AI will continue to evolve the workforce, Altman said he couldn’t make specific predictions.

“There are cases where entire classes of jobs will go away,” Altman said. “There are entirely new classes of jobs that will come and largely, I think this will look somewhat like most of history, and that the tools people have to use their jobs will let them do more, achieve things in new ways.” 

One of the unexpected upsides to the rollout of GPT has been how much it is used by small businesses, Altman said. He shared a story of an Uber driver who told him he was using ChatGPT for legal consultations, customer support, marketing decisions and more.

“It was not like he was taking jobs from other people. His business just would have failed,” Altman said. “He couldn’t pay for the lawyers. He couldn’t pay for the customer support people.”

Altman said he was surprised that the financial industry was one of the first to begin integrating GPT models into its work because it is highly regulated, but some of OpenAI’s earliest enterprise partners have been financial institutions like Morgan Stanley. The company is now increasingly working with the government, which has its own standards and procurement process for AI, to roll out OpenAI services to its employees.

Altman acknowledged the risks AI poses in these regulated institutions, and with the models themselves. Financial services are facing a fraud problem, and AI is only making it worse — it’s easier than ever to fake voice or likeness authentication, Altman said.

AI decisionmaking in financial and other industries presents data privacy concerns and potential for discrimination. Altman said GPT’s model is “steerable,” in that you can tell it to not consider factors like race or sex in making a decision, and that much of the bias in AI comes from the humans themselves.

“I think AIs are dispassionate and unemotional,” Altman said. “And I think it’ll be possible for AI — correctly built — to be a significant de-biasing force in many industries, and I think that’s not what many people thought, including myself, with the way we used to do AI.”

As much as Altman touted GPT and other AI models’ ability to increase productivity and save humans time, he also spoke about his concerns.

He said that though they have been greatly reduced in more recent models, AI hallucinations, in which models produce inaccurate or made-up outputs, are still possible. He also spoke of a newer concept called prompt injections, the idea that a model that has learned personal information can be tricked into telling a user something they shouldn’t know.

In addition to the threat of foreign adversaries using AI for harm, Altman said he has two other major concerns for the evolution of AI. It feels very unlikely, he said, but “loss of control,” or the idea that AI overpowers humans, is possible.

What concerns him the most is the idea that models could get so integrated into society and get so smart that humans become reliant on them without realizing.

“And even without a drop of malevolence from anyone, society can just veer in a sort of strange direction,” he said.

There are mild cases of this happening, Altman said, like young people overrelying on ChatGPT to make emotional, life-altering decisions for them.

“We’re studying that. We’re trying to understand what to do about it,” Altman said. “Even if ChatGPT gives great advice, even if ChatGPT gives way better advice than any human therapist, something about kind of collectively deciding we’re going to live our lives the way that the AI tells us feels bad and dangerous.”

AI data centers are using more power. Regular customers are footing the bill

21 July 2025 at 10:15
As power-hungry data centers proliferate, states are searching for ways to protect utility customers from the steep costs of upgrading the electrical grid, trying instead to shift the cost to AI-driven tech companies. (Dana DiFilippo/New Jersey Monitor)

Regular energy consumers, not corporations, will bear the brunt of the increased costs of a boom in artificial intelligence that has contributed to a growth in data centers and a surge in power usage, recent research suggests.

Between 2024 and 2025, data center power usage accounted for $9 billion, or 174%, of increased power costs, a June report by Monitoring Analytics, an external market monitor for PJM Interconnection, found. PJM manages the electrical power grid and wholesale electric market for 13 states and Washington, D.C., and this spring, customers were told to expect roughly a $25 increase on their monthly electric bill starting June 1.

“The growth in data center load and the expected future growth in data center load are unique and unprecedented and uncertain and require a different approach than simply asserting that it is just supply and demand,” Monitoring Analytics’ report said.

Data centers house the physical infrastructure to power most of the computing we do today, but many AI models, and the large AI companies that power them, like Amazon, Meta and Microsoft, use vastly more energy than other kinds of computing. Training a single chatbot like ChatGPT uses about the same amount of energy as 100 homes over the course of a year, an AI founder told States Newsroom earlier this year.

The growth of data centers — and how much power they use — came on fast. A 2024 report by the Joint Legislative Audit and Review Commission in Virginia — known as a global hub for data centers — found that PJM forecasts the state will use double its average monthly energy in 2033 compared with 2023. Without new data centers, energy use would grow only 15% by 2040, the report said.

As of July, the United States is home to more than 3,800 data centers, up from more than 3,600 in April. A majority of data centers are connected to the same electrical grids that power residential homes, commercial buildings and other structures.

“There are locational price differences, but data centers added anywhere in PJM have an effect on prices everywhere in PJM,” Joseph Bowring, president of Monitoring Analytics, said.

Creeping costs

At least 36 states, both conservative and liberal, offer tax incentives to companies planning on building data centers in their states. But the increased costs that customers are experiencing have made some wonder if the projects are the economic wins they were touted as.

“I’m not convinced that boosting data centers, from a state policy perspective, is actually worth it,” said New Jersey State Sen. Andrew Zwicker, a Democrat and co-sponsor of a bill to separate data centers from regular power supply. “It doesn’t pay for a lot of permanent jobs.”

Energy cost has historically followed a socialized model, based on the idea that everyone benefits from reliable electricity, said Ari Peskoe, the director of the Electricity Law Initiative at the Harvard Law School Environmental and Energy Law Program. Although some of the pricing model is based on a customer’s actual use, some costs, like new power generation, transmission and infrastructure projects, are spread across all customers.

Data centers’ rapid growth is “breaking” this tradition behind utility rates.

“These are cities, these data centers, in terms of how much electricity they use,” Peskoe said. “And it happens to be that these are the world’s wealthiest corporations behind these data centers, and it’s not clear how much local communities actually benefit from these data centers. Is there any justification for forcing everyone to pay for their energy use?”

This spring in Virginia, Dominion Energy filed a request with the State Corporation Commission to increase the rates it charges by an additional $10.50 on the monthly bill of an average resident and another $10.92 per month to pay for higher fuel costs, the Virginia Mercury reported.

Dominion, and another local supplier, recently filed a proposal to separate data centers into their own rate class to protect other customers, but the additional charges demonstrate the price increases that current contracts could pass on to customers.

In June, the Federal Energy Regulatory Commission convened a technical conference to assess the adequacy of PJM’s resources and those of other major power suppliers, like Midcontinent Independent System Operator, Inc., ISO New England Inc., New York Independent System Operator, Inc., California Independent System Operator Corporation (CAISO) and Southwest Power Pool (SPP).

The current supply of power by PJM is not adequate to meet the current and future demand from large data center loads, Monitoring Analytics asserts in a report following the conference.

“Customers are already bearing billions of dollars in higher costs as a direct result of existing and forecast data center load,” the report said.

Proposed changes

One of the often-proposed solutions to soften the increased cost of data centers is to require them to bring their own generation, meaning they’d contract with a developer to build a power plant that would be big enough to meet their own demand. Though there are other options, like co-location, which means putting some of the electrical demand on an outside source, total separation is the foremost solution Bowring presents in his reports.

“Data centers are unique in terms of their growth and impact on the grid, unique in the history of the grid, and therefore, we think that’s why we think data centers should be treated as a separate class,” Bowring said.

Some data centers are already voluntarily doing this. Constellation Energy, the owner of Three Mile Island nuclear plant in central Pennsylvania, struck a $16 billion deal with Microsoft to power the tech giant’s AI energy demand needs. 

But in some states, legislators are seeking a more binding solution.

New Jersey Sen. Bob Smith, a Democrat who chairs the Environment and Energy Committee, authored a bill this spring that would require new AI data centers in the state to supply their power from new, clean energy sources, if other states in the region enact similar measures.

“Seeing the large multinational trillion dollar companies, like Microsoft and Meta, be willing to do things like restart Three Mile Island is crazy, but shows you their desperation,” said co-sponsor Zwicker. “And so, okay, you want to come to New Jersey? Great, but you’re not going to put the basis (of the extra cost) on ratepayers.”

New Jersey House members launched a probe into PJM’s practices as the state buys its annual utilities from the supplier at auction this month. Its July 2024 auction saw electrical costs increase by more than 800%, which contributed to the skyrocketing bills that took effect June 1.

Residents are feeling it, Smith said, and he and his co-sponsors plan to use the summer to talk to the other states within PJM’s regional transmission organization (RTO).

“Everything we’re detecting so far is they’re just as angry — the other 13 entities in PJM — as us,” Smith told States Newsroom.

Smith said they’re discussing the possibility of joining or forming a different RTO.

“We’re in the shock and horror stage where these new prices are being included in these bills, and citizens are screaming in pain,” Smith said. “A solution that I filed in the bill is the one that says, ‘AI data centers, you’re welcome in New Jersey, but bring your own clean electricity with them so they don’t impact the ratepayers.’”

Utah enacted a law this year that allows “large load” customers like data centers to craft separate contracts with utilities, and a bill in Oregon, which would create a separate customer class for data centers, called the POWER Act, passed through both chambers last month.

If passed, New Jersey’s bill would join laws across the country in redefining the relationship between data centers powering AI and utility providers.

“We have to take action, and I think we have to be pretty thoughtful about this, and look at the big picture as well,” Zwicker said. “I’m not anti-data center, I’m pro-technology, but I’m just not willing to put it on the backs of ratepayers.”

Senate votes 99-1 to remove AI moratorium from megabill

1 July 2025 at 18:47
Republican Sens. Ted Cruz of Texas and Marsha Blackburn of Tennessee, shown here in a June 17, 2025, committee hearing, proposed paring down the moratorium on state-based AI laws included in the budget bill, but the provision still proved unpopular. On Monday, Blackburn cosponsored an amendment to remove the measure. (Photo by Kayla Bartkowski/Getty Images)

A moratorium on state-based artificial intelligence laws was struck from the “Big Beautiful Bill” Monday night in a 99-1 vote in the U.S. Senate, after getting less and less popular with state and federal lawmakers, state officials and advocacy groups since it was introduced in May.

The moratorium had evolved in the seven weeks since it was introduced into the megabill. At an early May Senate Commerce Committee session, Sen. Ted Cruz of Texas said it was in his plans to create “a regulatory sandbox for AI” that would prevent state overregulation and promote the United States’ AI industry.

GOP senators initially proposed a 10-year ban on all state laws relating to artificial intelligence, saying the federal government should be the only legislative body to regulate the technology. Over several hearings, congressional members and expert witnesses debated the level of involvement the federal government should take in regulating AI. They discussed states’ rights, safety concerns for the technology and how other governmental bodies, like the European Union, are regulating AI.

Over the weekend, Sen. Marsha Blackburn of Tennessee and Cruz developed a pared-down version of the moratorium that proposed a five-year ban and made exceptions for some laws with specific aims, such as protecting children or limiting deepfake technologies. Changes over the weekend also tied states’ ability to collect federal funding to expand broadband access to their willingness to nullify their existing AI laws.

Monday night, an amendment to remove the moratorium from the budget bill — cosponsored by Blackburn and Sen. Maria Cantwell, a Washington Democrat — was passed 99-1.

“The Senate came together tonight to say that we can’t just run over good state consumer protection laws,” Cantwell said in a statement. “States can fight robocalls, deepfakes and provide safe autonomous vehicle laws. This also allows us to work together nationally to provide a new federal framework on Artificial Intelligence that accelerates U.S. leadership in AI while still protecting consumers.” 

The “overwhelming” vote reflects how unpopular unregulated AI is among voters and legislators in both parties, Alexandra Reeve Givens, president and CEO of the tech policy organization Center for Democracy and Technology, said in a statement.

“Americans deserve sensible guardrails as AI develops, and if Congress isn’t prepared to step up to the plate, it shouldn’t prevent states from addressing the challenge,” Reeve Givens said. “We hope that after such a resounding rebuke, Congressional leaders understand that it’s time for them to start treating AI harms with the seriousness they deserve.”

Changes made to AI moratorium amid bill’s ‘vote-a-rama’

1 July 2025 at 09:00
Senate leaders are bending to bipartisan opposition and softening a proposed ban on state-level regulation of artificial intelligence. (Photo by Jennifer Shutt/States Newsroom)

Editor’s Note: This story has been updated to reflect the fact that Tennessee Sen. Marsha Blackburn backed off her own proposal late on Monday.

Senate Republicans are aiming to soften a proposed 10-year moratorium on state-level artificial intelligence laws that has received pushback from congressmembers on both sides of the aisle.

Sen. Marsha Blackburn of Tennessee and Sen. Ted Cruz of Texas developed a pared-down version of the moratorium Sunday that shortens the duration of the ban and makes exceptions for some laws with specific aims, such as protecting children or limiting deepfake technologies.

The ban is part of the quickly evolving megabill that Republicans are aiming to pass by July 4. The Senate parliamentarian ruled Friday that a narrower version of the moratorium could remain, but the proposed changes enact a pause — banning states from regulating AI if they want access to the $500 million in AI infrastructure and broadband funding included in the bill.

The compromise amendment brings the state-level AI ban to five years instead of 10, and carves out room for specific laws that address rules on child online safety and protecting against unauthorized generative images of a person’s likeness, often called deepfakes. The drafted amendment, obtained and published by Politico Sunday, still bans laws that aim to regulate AI models and decisionmaking systems.

Blackburn has been vocal against the rigidity of the original 10-year moratorium, and recently reintroduced a bill called the Kids Online Safety Act, alongside Connecticut Democrat Sen. Richard Blumenthal, Senate Majority Leader John Thune of South Dakota and Senate Minority Leader Chuck Schumer of New York. The bill would require tech companies to take steps to prevent potentially harmful material, like posts about eating disorders and instances of online bullying, from impacting children.

Blackburn said in a statement Sunday that she was “pleased” that Cruz agreed to update the provisions to exclude laws that “protect kids, creators, and other vulnerable individuals from the unintended consequences of AI.” This proposed version of the amendment would allow her state’s ELVIS Act, which prohibits people from using AI to mimic a person’s voice in the music industry without their permission, to continue to be enforced.

Late Monday, however, Blackburn backed off her own amendment, saying the language was “unacceptable” because it did not go as far as the Kids Online Safety Act in allowing states to protect children from potential harms of AI. Her move left the fate of the compromise measure in doubt as the Senate continued to debate the large tax bill to which it was attached.

Though introduced by Senate Republicans, the AI moratorium had been losing favor among GOP congressmembers and state officials.

Senators Josh Hawley of Missouri, Jerry Moran of Kansas and Ron Johnson of Wisconsin were expected to vote against the moratorium, and Georgia Rep. Marjorie Taylor Greene said during a congressional hearing in June that she had changed her mind, after initially voting for the amendment.

“I support AI in many different faculties,” she said during the June 5 House Oversight Committee hearing. “However, I think that at this time, as our generation is very much responsible, not only here in Congress, but leaders in tech industry and leaders in states and all around the world have an incredible responsibility of the future and development regulation and laws of AI.”

On Friday, a group of 17 Republican governors wrote in a letter to Thune and Speaker Mike Johnson, asking them to remove the ban from the megabill.

“While the legislation overall is very strong, there is one small portion of it that threatens to undo all the work states have done to protect our citizens from the misuse of artificial intelligence,” the governors wrote. “We are writing to encourage congressional leadership to strip this provision from the bill before it goes to President Trump’s desk for his signature.”

Alexandra Reeve Givens, president and CEO of tech policy organization Center for Democracy and Technology, said in a statement Monday that all versions of the AI moratorium would hurt states’ ability to protect people from “potentially devastating AI harms.”

“Despite the multiple revisions of this policy, it’s clear that its drafters are not considering the moratorium’s full implications,” Reeve Givens said. “Congress should abandon this attempt to stifle the efforts of state and local officials who are grappling with the implications of this rapidly developing technology, and should stop abdicating its own responsibility to protect the American people from the real harms that these systems have been shown to cause.”

The updated language proposed by Blackburn and Cruz isn’t expected to be a standalone amendment to the reconciliation bill, Politico reported, but rather part of a broader amendment of changes as the Senate continues its “vote-a-rama” on the bill this week.
