EU’s new AI code of practice could set regulatory standard for American companies

Some American companies have agreed to comply with new, voluntary AI standards from European Union regulators, in advance of new regulations set for 2027, but others have decried them as overreach. (Photo by Santiago Urquijo/Getty Images)

American companies are split between support and criticism of a new voluntary European AI code of practice, meant to help tech companies align themselves with upcoming regulations from the European Union’s landmark AI Act.

The voluntary code, called the General Purpose AI Code of Practice, which rolled out in July, is meant to help companies jump-start their compliance. Even non-European companies will be required to meet certain standards of transparency, safety, security and copyright compliance to operate in Europe come August 2027.  

Many tech giants have already signed the code of practice, including Amazon, Anthropic, OpenAI, Google, IBM, Microsoft, Mistral AI, Cohere and Fastweb. But others have refused.

In July, Meta’s Chief Global Affairs Officer Joel Kaplan said in a statement on LinkedIn that the company would not commit.

“Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it,” he wrote. “This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

Google has signed the code, though its President of Global Affairs, Kent Walker, was critical of it in a company statement.

“We remain concerned that the AI Act and Code risk slowing Europe’s development and deployment of AI,” Walker wrote. “In particular, departures from EU copyright law, steps that slow approvals, or requirements that expose trade secrets could chill European model development and deployment, harming Europe’s competitiveness.”

The divergent approach of U.S. and European regulators has showcased a clear difference in attitude about AI protections and development between the two markets, said Vivien Peaden, a tech and privacy attorney with Baker Donelson.

She compared the approaches to cars — Americans are known for fast, powerful vehicles, while European cars are stylish and eco-friendly.

“Some people will say, I’m really worried that this engine is too powerful. You could drive the car off a cliff, and there’s not much you can do but to press the brake and stop it, so I like the European way,” Peaden said. “My response is, ‘Europeans make their car their way, right?’ You can actually tell the difference. Why? Because it was designed with a different mindset.”

While the United States federal government has recently enacted some AI legislation through the Take It Down Act, which prohibits AI-generated nonconsensual depictions of individuals, it has not passed any comprehensive laws on how AI may operate. The Trump administration’s recent AI Action Plan paves a clear way for AI companies to continue to grow rapidly and unregulated.

But under the EU’s AI Act, tech giants like Amazon, Google and Meta will need to be more transparent about how their models are trained and operated, and follow rules for managing systemic risks if they’d like to operate in Europe.

“Currently, it’s still voluntary,” Peaden said. “But I do believe it’s going to be one of the most influential standards in AI’s industry.”

General Purpose AI Code of Practice

The EU AI Act was passed last year to mitigate risk created by AI models, and the law creates “strict obligations” for models that are considered “high risk.” High-risk AI models are those that can pose serious risks to health, safety or fundamental rights when used for employment, education, biometric identification and law enforcement, the act said.

Some AI practices — including AI-based manipulation and deception, predictions of criminal offenses, social scoring, emotion recognition in workplaces and educational institutions, and real-time biometric identification for law enforcement — are considered “unacceptable risk” and are banned from use in the EU altogether.

Some of these practices, like social scoring — using an algorithm to determine access to privileges or outcomes like mortgages or jail time — are widely used, and often unregulated, in the United States.

While AI models that will be released after Aug. 2 already have to comply with the EU AI Act’s standards, large language models (LLMs) — the technical foundation of AI models — released before that date have through August 2027 to fully comply. The code of practice released last month offers a voluntary way for companies to get into compliance early, and with more leniency than when the 2027 deadline hits, it says.

The code of practice has three chapters: transparency, copyright, and safety and security. The copyright requirements are likely where American and European companies are most sharply split, said Yelena Ambartsumian, founder of tech consultancy firm Ambart Law.

In order to train LLMs, you need a broad, high-quality dataset with good grammar, Ambartsumian said. Many American LLMs turn to pirated collections of books.

“So [American companies] made a bet that, instead of paying for this content, licensing it, which would cost billions of dollars, the bet was, ‘Okay, we’re going to develop these LLMs, and then we’ll deal with the fallout, the lawsuits later,’” Ambartsumian said. “But at that point, we’ll be in a position where, because of our war chest, or because of our revenue, we’ll be able to deal with the fallout of this fair use litigation.”

And those bets largely worked out. In two recent lawsuits, Bartz v. Anthropic and Kadrey v. Meta, judges ruled in favor of the AI developers based on the “fair use” doctrine, which allows limited use of copyrighted material without permission for purposes such as commentary, criticism, teaching and research. In AI developer Anthropic’s case, Judge William Alsup likened the training process to how a human might read, process, and later draw on a book’s themes to create new content.

But the EU’s copyright policy bans developers from training AI on pirated content and says companies must also comply with content owners’ requests to not use their works in their datasets. It also outlines transparency rules for web crawlers, the programs AI developers use to scan the internet for training data. AI companies will also have to routinely update documentation about their AI tools and services for privacy and security.

Those subject to the requirements of the EU’s AI Act are general purpose AI models, nearly all of which are built by large American corporations, Ambartsumian said. Even if a smaller AI model comes along, it’s often quickly purchased by one of the tech giants, or they develop their own versions of the tool.

“I would also say that in the last year and a half, we’ve seen a big shift where no one right now is trying to develop a large language model that isn’t one of these large companies,” Ambartsumian said.

Regulations could bring markets together

There’s a “chasm” between the huge American tech companies and European startups, said Jeff Le, founder and managing partner of tech policy consultancy 100 Mile Strategies LLC. There’s a sense that Europe is trying to catch up with the Americans who have had unencumbered freedom to grow their models for years.

But Le said he thinks it’s interesting that Meta has categorized the code of practice as overreach.

“I think it’s an interesting comment at a time where Europeans understandably have privacy and data stewardship questions,” Le said. “And that’s not just in Europe. It’s in the United States too, where I think Gallup polls and other polls have revealed bipartisan support for consumer protection.”

As the code of practice says, signing now will reduce companies’ administrative burden when the AI Act goes into full enforcement in August 2027. Le said that relationships between companies that sign could garner them more understanding and familiarity when the regulatory burdens are in place.

But some may feel the transparency or copyright requirements could cost them a competitive edge, he said.

“I can see why Meta, which would be an open model, they’re really worried about (the copyright) because this is a big part of their strategy and catching up with OpenAI and (Anthropic),” Le said. “So there’s that natural tension that will come from that, and I think that’s something worth noting.”

Le said that the large AI companies are likely trying to anchor themselves toward a framework that they think they can work with, and maybe even influence. Right now, the U.S. is a patchwork of AI legislation. Some of the protections outlined in the EU AI Act are mirrored in state laws, but there’s no universal code for global companies.

The EU’s code of practice could end up being that standard-setter, Peaden said.

“Even though it’s not mandatory, guess what? People will start following,” she said. “Frankly, I would say the future of building the best model lies in a few other players. And I do think that … if four out of five of the primary AI providers are following the general purpose AI code of practice, the others will follow.”

Editor’s note: This item has been modified to revise comments from Jeff Le.

AI data centers are using more power. Regular customers are footing the bill

As power-hungry data centers proliferate, states are searching for ways to protect utility customers from the steep costs of upgrading the electrical grid, trying instead to shift the cost to AI-driven tech companies. (Dana DiFilippo/New Jersey Monitor)

Regular energy consumers, not corporations, will bear the brunt of the increased costs of a boom in artificial intelligence that has contributed to a growth in data centers and a surge in power usage, recent research suggests.

Between 2024 and 2025, data center power usage accounted for $9 billion, or 174%, of increased power costs, a June report by Monitoring Analytics, an external market monitor for PJM Interconnection, found. PJM manages the electrical power grid and wholesale electric market for 13 states and Washington, D.C., and this spring, customers were told to expect roughly a $25 increase on their monthly electric bill starting June 1.

“The growth in data center load and the expected future growth in data center load are unique and unprecedented and uncertain and require a different approach than simply asserting that it is just supply and demand,” Monitoring Analytics’ report said.

Data centers house the physical infrastructure for most of the computing we do today, but AI models, and the large companies behind them like Amazon, Meta and Microsoft, use vastly more energy than other kinds of computing. Training a single chatbot like ChatGPT uses about the same amount of energy as 100 homes over the course of a year, an AI founder told States Newsroom earlier this year.

The growth of data centers — and how much power they use — came on fast. A 2024 report by the Joint Legislative Audit and Review Commission in Virginia — known as a global hub for data centers — found that PJM forecasts the state will use double the average monthly energy in 2033 that it did in 2023. Without new data centers, energy use would grow only 15% by 2040, the report said.

As of July, the United States is home to more than 3,800 data centers, up from more than 3,600 in April. A majority of data centers are connected to the same electrical grids that power residential homes, commercial buildings and other structures.

“There are locational price differences, but data centers added anywhere in PJM have an effect on prices everywhere in PJM,” said Joseph Bowring, president of Monitoring Analytics.

Creeping costs

At least 36 states, both conservative and liberal, offer tax incentives to companies planning on building data centers in their states. But the increased costs that customers are experiencing have made some wonder if the projects are the economic wins they were touted as.

“I’m not convinced that boosting data centers, from a state policy perspective, is actually worth it,” said New Jersey State Sen. Andrew Zwicker, a Democrat and co-sponsor of a bill to separate data centers from regular power supply. “It doesn’t pay for a lot of permanent jobs.”

Energy costs have historically followed a socialized model, based on the idea that everyone benefits from reliable electricity, said Ari Peskoe, the director of the Electricity Law Initiative at the Harvard Law School Environmental and Energy Law Program. Although part of the pricing model is based on a customer’s actual use, some costs — like new power generation, transmission and infrastructure projects — are spread across all customers.

Data centers’ rapid growth is “breaking” this tradition behind utility rates.

“These are cities, these data centers, in terms of how much electricity they use,” Peskoe said. “And it happens to be that these are the world’s wealthiest corporations behind these data centers, and it’s not clear how much local communities actually benefit from these data centers. Is there any justification for forcing everyone to pay for their energy use?”

This spring in Virginia, Dominion Energy filed a request with the State Corporation Commission to increase the rates it charges by an additional $10.50 on the monthly bill of an average resident and another $10.92 per month to pay for higher fuel costs, the Virginia Mercury reported.

Dominion, and another local supplier, recently filed a proposal to separate data centers into their own rate class to protect other customers, but the additional charges demonstrate the price increases that current contracts could pass on to customers.

In June, the Federal Energy Regulatory Commission convened a technical conference to assess the adequacy of PJM’s resources and those of other major power suppliers, like Midcontinent Independent System Operator, Inc., ISO New England Inc., New York Independent System Operator, Inc., California Independent System Operator Corporation (CAISO) and Southwest Power Pool (SPP).

The current supply of power in PJM is not adequate to meet the current and future demand from large data center loads, Monitoring Analytics asserted in a report following the conference.

“Customers are already bearing billions of dollars in higher costs as a direct result of existing and forecast data center load,” the report said.

Proposed changes

One often-proposed solution to soften the increased cost of data centers is to require them to bring their own generation, meaning they’d contract with a developer to build a power plant big enough to meet their own demand. Though there are other options, like co-location — siting a data center at an existing power plant and drawing electricity directly from it — total separation is the foremost solution Bowring presents in his reports.

“Data centers are unique in terms of their growth and impact on the grid, unique in the history of the grid, and that’s why we think data centers should be treated as a separate class,” Bowring said.

Some data centers are already voluntarily doing this. Constellation Energy, the owner of the Three Mile Island nuclear plant in central Pennsylvania, struck a $16 billion deal with Microsoft to power the tech giant’s AI energy demands.

But in some states, legislators are seeking to find a more binding solution.

New Jersey Sen. Bob Smith, a Democrat who chairs the Environment and Energy Committee, authored a bill this spring that would require new AI data centers in the state to supply their power from new, clean energy sources, if other states in the region enact similar measures.

“Seeing the large multinational trillion dollar companies, like Microsoft and Meta, be willing to do things like restart Three Mile Island is crazy, but shows you their desperation,” said co-sponsor Zwicker. “And so, okay, you want to come to New Jersey? Great, but you’re not going to put the basis (of the extra cost) on ratepayers.”

New Jersey House members launched a probe into PJM’s practices as the state bought its annual power supply at the operator’s auction this month. PJM’s July 2024 auction saw electrical costs increase by more than 800%, which contributed to the skyrocketing bills that took effect June 1.

Residents are feeling it, Smith said, and he and his co-sponsors plan to use the summer to talk to the other states within PJM’s regional transmission organization (RTO).

“Everything we’re detecting so far is they’re just as angry — the other 13 entities in PJM — as us,” Smith told States Newsroom.

Smith said they’re discussing the possibility of joining or forming a different RTO.

“We’re in the shock and horror stage where these new prices are being included in these bills, and citizens are screaming in pain,” Smith said. “A solution that I filed in the bill is the one that says, ‘AI data centers, you’re welcome in New Jersey, but bring your own clean electricity with you so you don’t impact the ratepayers.’”

Utah enacted a law this year that allows “large load” customers like data centers to craft separate contracts with utilities, and a bill in Oregon, which would create a separate customer class for data centers, called the POWER Act, passed through both chambers last month.

If passed, New Jersey’s law would join others across the country in redefining the relationship between data centers powering AI and utilities providers.

“We have to take action, and I think we have to be pretty thoughtful about this, and look at the big picture as well,” Zwicker said. “I’m not anti-data center, I’m pro-technology, but I’m just not willing to put it on the backs of ratepayers.”
