EU’s new AI code of practice could set regulatory standard for American companies

Some American companies have agreed to comply with new, voluntary AI standards from European Union regulators, in advance of new regulations set for 2027, but others have decried them as overreach. (Photo by Santiago Urquijo/Getty Images)

American companies are split between support and criticism of a new voluntary European AI code of practice, meant to help tech companies align themselves with upcoming regulations from the European Union’s landmark AI Act.

The voluntary code, called the General Purpose AI Code of Practice, rolled out in July and is meant to help companies jump-start their compliance. Even non-European companies will be required to meet certain standards of transparency, safety, security and copyright compliance to operate in Europe come August 2027.

Many tech giants have already signed the code of practice, including Amazon, Anthropic, OpenAI, Google, IBM, Microsoft, Mistral AI, Cohere and Fastweb. But others have refused.

In July, Meta’s Chief Global Affairs Officer Joel Kaplan said in a statement on LinkedIn that the company would not commit.

“Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it,” he wrote. “This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

Google has signed the code of practice, though its President of Global Affairs Kent Walker was critical of it in a company statement.

“We remain concerned that the AI Act and Code risk slowing Europe’s development and deployment of AI,” Walker wrote. “In particular, departures from EU copyright law, steps that slow approvals, or requirements that expose trade secrets could chill European model development and deployment, harming Europe’s competitiveness.”

The divergent approach of U.S. and European regulators has showcased a clear difference in attitude about AI protections and development between the two markets, said Vivien Peaden, a tech and privacy attorney with Baker Donelson.

She compared the approaches to cars — Americans are known for fast, powerful vehicles, while European cars are stylish and eco-friendly.

“Some people will say, I’m really worried that this engine is too powerful. You could drive the car off a cliff, and there’s not much you can do but to press the brake and stop it, so I like the European way,” Peaden said. “My response is, ‘Europeans make their car their way, right? You can actually tell the difference. Why? Because it was designed with a different mindset.’”

While the United States federal government has recently enacted some AI legislation through the Take It Down Act, which prohibits the nonconsensual publication of intimate images, including AI-generated ones, it has not passed any comprehensive laws on how AI may operate. The Trump administration’s recent AI Action Plan paves a clear way for AI companies to continue to grow rapidly and largely unregulated.

But under the EU’s AI Act, tech giants like Amazon, Google and Meta will need to be more transparent about how their models are trained and operated, and follow rules for managing systemic risks if they’d like to operate in Europe.

“Currently, it’s still voluntary,” Peaden said. “But I do believe it’s going to be one of the most influential standards in the AI industry.”

General Purpose AI Code of Practice

The EU AI Act, passed last year to mitigate risks created by AI models, creates “strict obligations” for models that are considered “high risk.” High-risk AI models are those that can pose serious risks to health, safety or fundamental rights when used for employment, education, biometric identification and law enforcement, according to the act.

Some AI practices, including AI-based manipulation and deception, predictions of criminal offenses, social scoring, emotion recognition in workplaces and educational institutions, and real-time biometric identification for law enforcement, are considered “unacceptable risk” and are banned from use in the EU altogether.

Some of these practices, like social scoring — using an algorithm to determine access to privileges or outcomes such as mortgage approvals or sentencing — are widely used, and often unregulated, in the United States.

AI models released after Aug. 2 must already comply with the EU AI Act’s standards, but large language models (LLMs) — the technical foundation of AI models — released before that date have until August 2027 to comply fully. The code of practice released last month offers companies a voluntary way to get into compliance early, with more leniency than they will have once the 2027 deadline hits, it says.

The code of practice has three chapters: transparency, copyright, and safety and security. The copyright requirements are likely where American and European companies are most sharply split, said Yelena Ambartsumian, founder of tech consultancy firm Ambart Law.

Training an LLM requires a broad, high-quality dataset with good grammar, Ambartsumian said, and many American developers have turned to pirated collections of books.

“So [American companies] made a bet that, instead of paying for this content, licensing it, which would cost billions of dollars, the bet was, okay, ‘we’re going to develop these LLMs, and then we’ll deal with the fallout, the lawsuits later,’” Ambartsumian said. “But at that point, we’ll be in a position where, because of our war chest, or because of our revenue, we’ll be able to deal with the fallout of this fair use litigation.”

And those bets largely worked out. In two recent lawsuits, Bartz v. Anthropic and Kadrey v. Meta, judges ruled in favor of the AI developers based on the “fair use” doctrine, which allows people to use copyrighted material without permission in certain contexts, such as commentary, news reporting or transformative new works. In AI developer Anthropic’s case, Judge William Alsup likened the training process to how a human might read, process, and later draw on a book’s themes to create new content.

But the EU’s copyright policy bans developers from training AI on pirated content and says companies must comply with content owners’ requests not to use their works in training datasets. It also outlines transparency rules for web crawlers — the programs AI companies use to sweep the internet for information — and requires companies to routinely update privacy and security documentation for their AI tools and services.
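The most common machine-readable form of such an opt-out is a site’s robots.txt file. As a minimal sketch of how a crawler honors it — an illustration, not anything the code of practice itself prescribes — the check can be done with Python’s standard urllib.robotparser. “GPTBot” is OpenAI’s publicly documented crawler user-agent, and the example.com URLs are hypothetical placeholders:

```python
# Minimal sketch: a crawler checking a site's robots.txt before fetching.
# Uses only the Python standard library. "GPTBot" is OpenAI's documented
# crawler user-agent; the example.com URLs are hypothetical placeholders.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # download and parse the site's robots.txt

page = "https://example.com/articles/some-page"
if rp.can_fetch("GPTBot", page):
    print("Crawling allowed for this page.")
else:
    print("Rights reserved via robots.txt; skip this page.")
```

Real crawlers layer rate limits, sitemap handling and published IP ranges on top, but the opt-out check the code’s transparency rules lean on is roughly this simple.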

The EU AI Act’s requirements fall on general-purpose AI models, nearly all of which are built by large American corporations, Ambartsumian said. Even when a smaller AI model comes along, it’s often quickly purchased by one of the tech giants, or they develop their own version of the tool.

“I would also say that in the last year and a half, we’ve seen a big shift where no one right now is trying to develop a large language model that isn’t one of these large companies,” Ambartsumian said.

Regulations could bring markets together

There’s a “chasm” between the huge American tech companies and European startups, said Jeff Le, founder and managing partner of tech policy consultancy 100 Mile Strategies LLC. There’s a sense that Europe is trying to catch up with the Americans who have had unencumbered freedom to grow their models for years.

But Le said he thinks it’s interesting that Meta has categorized the code of practice as overreach.

“I think it’s an interesting comment at a time where Europeans understandably have privacy and data stewardship questions,” Le said. “And that’s not just in Europe. It’s in the United States too, where I think Gallup polls and other polls have revealed bipartisan support for consumer protection.”

As the code of practice says, signing now will reduce companies’ administrative burden when the AI Act goes into full enforcement in August 2027. Le said the relationships that signatory companies build now could garner them more understanding and familiarity once the regulatory burdens are in place.

But some may feel the transparency or copyright requirements could cost them a competitive edge, he said.

“I can see why Meta, which would be an open model, they’re really worried about (the copyright) because this is a big part of their strategy and catching up with OpenAI and (Anthropic),” Le said. “So there’s that natural tension that will come from that, and I think that’s something worth noting.”

Le said that the large AI companies are likely trying to anchor themselves toward a framework that they think they can work with, and maybe even influence. Right now, the U.S. is a patchwork of AI legislation. Some of the protections outlined in the EU AI Act are mirrored in state laws, but there’s no universal code for global companies.

The EU’s code of practice could end up being that standard-setter, Peaden said.

“Even though it’s not mandatory, guess what? People will start following,” she said. “Frankly, I would say the future of building the best model lies in a few other players. And I do think that … if four out of five of the primary AI providers are following the general purpose AI code of practice, the others will follow.”

Editor’s note: This item has been modified to revise comments from Jeff Le.

Changes made to AI moratorium amid bill’s ‘vote-a-rama’

Senate leaders are bending to bipartisan opposition and softening a proposed ban on state-level regulation of artificial intelligence. (Photo by Jennifer Shutt/States Newsroom)

Editor’s Note: This story has been updated to reflect the fact that Tennessee Sen. Marsha Blackburn backed off her own proposal late on Monday.

Senate Republicans are aiming to soften a proposed 10-year moratorium on state-level artificial intelligence laws that has received pushback from lawmakers on both sides of the aisle.

Sen. Marsha Blackburn of Tennessee and Sen. Ted Cruz of Texas developed a pared-down version of the moratorium Sunday that shortens the length of the ban and makes exceptions for some laws with specific aims, such as protecting children or limiting deepfake technologies.

The ban is part of the quickly evolving megabill that Republicans are aiming to pass by July 4. The Senate parliamentarian ruled Friday that a narrower version of the moratorium could remain, and the proposed changes instead enact a conditional pause: states that want access to the $500 million in AI infrastructure and broadband funding included in the bill would be barred from regulating AI.

The compromise amendment shortens the state-level AI ban to five years instead of 10, and carves out room for specific laws that address child online safety and protect against unauthorized generative images of a person’s likeness, often called deepfakes. The drafted amendment, obtained and published by Politico Sunday, still bans laws that aim to regulate AI models and decision-making systems.

Blackburn has been vocal against the rigidity of the original 10-year moratorium, and recently reintroduced a bill called the Kids Online Safety Act alongside Democratic Sen. Richard Blumenthal of Connecticut, Senate Majority Leader John Thune of South Dakota and Senate Minority Leader Chuck Schumer of New York. The bill would require tech companies to take steps to prevent potentially harmful material, like posts about eating disorders and instances of online bullying, from reaching children.

Blackburn said in a statement Sunday that she was “pleased” that Cruz agreed to update the provisions to exclude laws that “protect kids, creators, and other vulnerable individuals from the unintended consequences of AI.” This proposed version of the amendment would allow her state’s ELVIS Act, which prohibits people from using AI to mimic a person’s voice in the music industry without their permission, to continue to be enforced.

Late Monday, however, Blackburn backed off her own amendment, saying the language was “unacceptable” because it did not go as far as the Kids Online Safety Act in allowing states to protect children from potential harms of AI. Her move left the fate of the compromise measure in doubt as the Senate continued to debate the large tax bill to which it was attached.

Though introduced by Senate Republicans, the AI moratorium was losing favor among GOP lawmakers and state officials.

Sens. Josh Hawley of Missouri, Jerry Moran of Kansas and Ron Johnson of Wisconsin were expected to vote against the moratorium, and Georgia Rep. Marjorie Taylor Greene said during a congressional hearing in June that she had changed her mind after initially voting for the House bill that included it.

“I support AI in many different faculties,” she said during the June 5 House Oversight Committee hearing. “However, I think that at this time, as our generation is very much responsible, not only here in Congress, but leaders in tech industry and leaders in states and all around the world have an incredible responsibility of the future and development regulation and laws of AI.”

On Friday, a group of 17 Republican governors wrote in a letter to Thune and Speaker Mike Johnson, asking them to remove the ban from the megabill.

“While the legislation overall is very strong, there is one small portion of it that threatens to undo all the work states have done to protect our citizens from the misuse of artificial intelligence,” the governors wrote. “We are writing to encourage congressional leadership to strip this provision from the bill before it goes to President Trump’s desk for his signature.”

Alexandra Reeve Givens, president and CEO of tech policy organization the Center for Democracy and Technology, said in a statement Monday that all versions of the AI moratorium would hurt states’ ability to protect people from “potentially devastating AI harms.”

“Despite the multiple revisions of this policy, it’s clear that its drafters are not considering the moratorium’s full implications,” Reeve Givens said. “Congress should abandon this attempt to stifle the efforts of state and local officials who are grappling with the implications of this rapidly developing technology, and should stop abdicating its own responsibility to protect the American people from the real harms that these systems have been shown to cause.”

The updated language proposed by Blackburn and Cruz isn’t expected to be a standalone amendment to the reconciliation bill, Politico reported, but rather part of a broader package of changes as the Senate continues its “vote-a-rama” on the bill this week.
