Trump’s ‘truth seeking’ AI executive order is a complex, expensive policy, experts say

U.S. President Donald Trump displays a signed executive order during the "Winning the AI Race" summit hosted by All‑In Podcast and Hill & Valley Forum at the Andrew W. Mellon Auditorium on July 23, 2025 in Washington, DC. Trump signed executive orders related to his Artificial Intelligence Action Plan during the event. (Photo by Chip Somodevilla/Getty Images)

An executive order signed by President Donald Trump last week seeks to remove “ideological agendas” from artificial intelligence models sold to the federal government, but it’s not exactly clear how the policy would be enforced, nor how tech companies would test their models for these standards, technologists and policy experts say.

The executive order says that AI companies must be “truth-seeking” and “ideologically neutral” to work with the government, and mentions removing ideologies involving diversity, equity and inclusion from AI models.

“Really what they’re talking about is putting limitations on AI models that produce, in their mind, ideology that they find objectionable,” said Matthew F. Ferraro, a partner at Crowell & Moring’s privacy and cybersecurity group.

Ferraro said that refining AI models and testing them for bias are goals that draw support on both sides of the aisle. Ferraro spent two years working with the Biden administration on the development of its AI policy, and said that Trump’s AI Action Plan, rolled out last week, shares many pillars with the Biden administration’s AI executive order.

Both plans named the potential for AI-enabled biohazard threats, the technology’s impact on the labor force and the importance of growing AI for American innovation, Ferraro said. Biden’s plan prioritized protections against harms, while Trump’s prioritized innovation above all else.

“It just so happened the Biden administration put those concerns first, not last, right?” Ferraro said. “But I was struck when reading the Trump AI action plan how similar the provisions that made recommendations [to] the Department of Labor were to initiatives under the Biden administration.”

In the executive order, Trump directs the Office of Management and Budget to issue compliance guidance within the next 120 days. But Ferraro said that as it stands, the order doesn’t provide enough clarity about the technical standards companies would need to meet, nor how to reach a standard of “truth.”

“It’s not at all clear to me, one, what exactly this executive order seeks to prevent, beyond sort of a gestalt of political speech that the administration found objectionable,” Ferraro said. “Second, it’s not [at] all clear to me, technically, how this is possible. And third, it’s not at all clear to me that even by its own terms, it’s going to be enforceable in any comprehensive way.”

Technical challenges

AI companies looking to contract with the government under the policies in this executive order would likely need to prepare for ongoing testing of their models, said David Acosta, cofounder and chief artificial intelligence officer of ARBOai.

When an AI model is in its early days of training, developers are mostly looking for technical functionality. But ongoing reviews, or audits, of the model’s growth and accuracy can be done through a process called “red teaming,” in which a third party introduces testing scenarios that probe for compliance, bias and accuracy.

Auditing is a part of what Acosta’s company specializes in, and he said his team has been flooded with requests since last year. The team anticipated there could be a struggle around objectivity in AI.

“This is something that essentially we predicted late last year, as far as where this narrative would come from,” Acosta said. “Who would try to steer it, load it and push it forward, because essentially it’s going to create a culture war.”

Many AI models have a “black box” algorithm, meaning that the model’s decision-making process is opaque and difficult to understand, even for its developers. Because of this, constant training and testing for accuracy are needed to assess whether the model is working as intended.

Acosta said that AI models should seek to meet certain accuracy requirements. A model should have a hallucination score — how often it produces fabricated, inaccurate or misleading information — below 5%. A model should also have an “uncertainty tax,” the ability to tell a user that there’s a level of uncertainty in its output.
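
As a rough illustration of what checking a model against thresholds like these could look like, the sketch below scores a batch of graded evaluation results. The data format, function names and the way the 5% cutoff is applied are assumptions made for illustration, not requirements drawn from the executive order or from ARBOai’s actual tooling.

```python
# Illustrative sketch only: scoring graded evaluation results against
# the kinds of thresholds Acosta describes. The result format and
# function names are hypothetical, not from the executive order.

HALLUCINATION_CEILING = 0.05  # Acosta's suggested ceiling: below 5%

def hallucination_rate(results: list[dict]) -> float:
    """Fraction of graded responses marked fabricated, inaccurate or misleading.

    Each result is assumed to look like:
    {"prompt": ..., "response": ..., "is_hallucination": bool,
     "flagged_uncertainty": bool}
    """
    return sum(r["is_hallucination"] for r in results) / len(results)

def uncertainty_coverage(results: list[dict]) -> float:
    """Of the wrong answers, how often the model at least told the user
    it was uncertain (the "uncertainty tax" Acosta describes)."""
    wrong = [r for r in results if r["is_hallucination"]]
    if not wrong:
        return 1.0
    return sum(r["flagged_uncertainty"] for r in wrong) / len(wrong)

def passes_accuracy_audit(results: list[dict]) -> bool:
    """True if the model's hallucination rate clears the assumed ceiling."""
    return hallucination_rate(results) < HALLUCINATION_CEILING
```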

This testing is expensive for AI firms, though, Acosta said. If the federal Office of Management and Budget requires testing of this kind, it may keep smaller firms from being able to contract with the government.

Third-party red teaming uses thousands of prompts to prod models for hallucinations and politically loaded responses, Acosta said. A well-balanced model will refuse to answer questions framed in certain ideology or rhetoric, or will produce primary sources to validate factual information.
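
In code terms, such a harness might look something like the sketch below: run a large battery of probe prompts through a model and tally how each response is graded. The model client, prompt file and grading function named here are placeholders, not any vendor’s actual tooling.

```python
# Illustrative red-teaming harness: probe a model with many prompts
# and tally graded outcomes. All names here are placeholders.

from collections import Counter
from typing import Callable

def red_team(model: Callable[[str], str],
             prompts: list[str],
             grade: Callable[[str, str], str]) -> Counter:
    """Send each probe prompt to `model` and classify its response.

    `grade` labels each (prompt, response) pair with a category,
    e.g. "ok", "hallucination", "ideological" or "refused".
    """
    tally = Counter()
    for prompt in prompts:
        tally[grade(prompt, model(prompt))] += 1
    return tally

# Hypothetical usage:
#   prompts = load_probe_set("political_probes.jsonl")  # thousands of probes
#   report = red_team(call_vendor_api, prompts, grader)
#   print(report)  # e.g. Counter({"ok": 4812, "refused": 130, ...})
```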

“Explainable AI and reasoning have always, always, always, always been important in order to make sure that we can substantiate what the feedback is,” Acosta said. “But at the end of the day, it always comes down to who it is that’s training these machines, and if they can remain neutral.”

What is “truth-seeking”?

The dataset that an AI model is trained on can determine what “truth” is. Most datasets are built from human-created data, which can itself contain errors and bias, a problem that many laws targeting discrimination in AI aim to address.

“One would hope that there’s a surface level of, not unified — because you’re never going to get a 100% — but a majority agreement on what absolute truth is and what it isn’t,” Acosta said.

He used the example of textbooks from decades ago: much of the information widely accepted as truth and taught in schools in the 1980s is now outdated. If an AI model were trained on those textbooks, its answers would not stand up to those of a model trained on a much larger, more diverse dataset.

But the removal of “ideological” information like race and sex from AI model training, as outlined in Trump’s executive order, poses a risk of inaccurate results.

“It’s such a slippery slope, because who decides? Data sanitization — when you’re filtering the training of data to remove ‘overtly ideological sources’ — I mean the risk there is, that in itself, is a subjective process, especially when you’re coming to race, gender and religion,” Acosta said.

The concept of ideological truth is not something the AI algorithm considers, Ferraro said. AI does not produce truth; it produces probabilities.

“They simply produce the most common next token,” based on their training, Ferraro said. “The AI does not know itself what is true or false, or what is, for that matter, ideological or not. And so preventing outputs that expressed [Trump’s] favorite ideological opinions can simply basically exceed the ability of the model builders to build.”
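
A toy sketch of the mechanism Ferraro is describing follows, with an invented vocabulary and invented scores: the model converts raw scores into a probability distribution over possible next tokens and emits the likeliest one, and nothing in that computation checks truth.

```python
# Toy illustration of next-token prediction: a model turns raw scores
# (logits) into probabilities and emits the most likely token.
# The vocabulary and scores below are invented for illustration.

import math

def softmax(logits: list[float]) -> list[float]:
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy scores for the continuation of "The capital of France is ..."
vocab = ["Paris", "Lyon", "banana"]
logits = [4.1, 1.3, -2.0]
probs = softmax(logits)

# The "answer" is just the highest-probability token; no step here
# evaluates whether that token is true, false or ideological.
print(max(zip(vocab, probs), key=lambda pair: pair[1]))  # ('Paris', 0.94...)
```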

Compliance

Enforcement of this executive order will require clarity from the OMB, said Peter Salib, an assistant professor of law at the University of Houston Law Center.

“It seems like there is one thing happening at the level of rhetoric and then another thing happening at the level of law,” Salib said.

The OMB would need to define the political ideology outlined in the executive order, and what is included or excluded, he said. For example, is a dataset tracking racial wealth inequality over 150 years considered some kind of DEI?

The order also puts companies that wish to comply in a position to decide whether they want to develop two versions of their AI models — one for the private sector, and one that fits Trump’s standards, Ferraro said.

There are exceptions carved out for some AI systems, including those contracted by national security agencies, according to the executive order. Salib also highlighted a section that walks back some of the order’s intent, saying the government will consider vendors compliant as long as they disclose the prompts, specifications, evaluations and other training information for their models.

“It’s saying you don’t actually have to make your AI ideologically neutral,” Salib said. “You just have to share with the government the system prompts and the specifications you used when training the AI. So the AI can be as woke or as right wing, or as truth seeking or as falsity seeking as you like, and it counts as compliance with the neutrality provision if you just disclose.”

In this case, the executive order could turn out to be a de facto transparency regulation for AI, more so than one that imposes requirements on AI’s ideology, he said.

The rollout of Trump’s AI Action Plan and the executive orders that accompanied it, including ones that fast-track data center permitting and expand AI exports, are the first real look into how the administration intends to handle the quickly growing AI industry.

If the OMB rolls out technological and accuracy-focused requirements for the order, it could be a win for those looking for more transparency in AI, Acosta said. But the office needs to be clear in its requirements and seek true neutrality.

“I think the intent is well with this, but there needs to be much, much, much more refinement moving forward,” Acosta said. 

Trump’s AI Action Plan removes ‘red tape’ for AI developers and data centers, punishes states that act alone

David Sacks, U.S. President Donald Trump's "AI and Crypto Czar", speaks to President Trump as he signs a series of executive orders in the Oval Office of the White House on Jan. 23, 2025 in Washington, D.C. Trump signed a range of executive orders pertaining to issues including cryptocurrency and artificial intelligence. (Photo by Anna Moneymaker/Getty Images)

The Trump administration wants to greatly expand the development and use of advanced artificial intelligence, including rolling back environmental rules to spur building of power-thirsty data centers and punishing states that attempt to regulate AI on their own.

The administration’s action plan, called “Winning the AI Race: America’s AI Action Plan,” released on Wednesday, is a result of six months of research by tech advisors, after Trump removed President Joe Biden’s signature AI guardrails on his first day in office. The plan takes a hands-off approach to AI safeguards, and invests in getting more American workers to use AI in their daily lives.

“To win the AI race, the U.S. must lead in innovation, infrastructure, and global partnerships,” AI and Crypto Czar David Sacks said in a statement. “At the same time, we must center American workers and avoid Orwellian uses of AI. This Action Plan provides a roadmap for doing that.”

The action plan outlines three major pillars — accelerate AI innovation, build American AI infrastructure and lead in international AI diplomacy and security.

The Trump administration says that to accelerate AI in the U.S., it needs to “remove red tape” around “onerous” AI regulations. The plan recommends that the Office of Science and Technology Policy ask businesses and the public about federal regulations that hinder AI innovation, and suggests the federal government end funding to states “with burdensome AI regulations.”

The plan does say that these actions should not interfere with states’ ability to pass AI laws that are not “unduly restrictive,” despite unsuccessful attempts by Congressional Republicans to impose an AI moratorium for the states.

The plan also says that free speech should be prioritized in AI, saying models must be trained so that “truth, rather than social engineering agendas” is the focus of model outputs. The plan recommends that the Department of Commerce and the National Institute of Standards and Technology (NIST) revise the NIST AI Risk Management Framework to eliminate references to misinformation, DEI and climate change.

The Trump administration also pushes for AI to be more widely adopted in government roles, manufacturing, science and in the Department of Defense, and proposes increased funding and regulatory sandboxes — separate trial spaces for AI to be developed — to do so.

To support the proposed increases in AI use, the plan outlines a streamlined permitting process for data centers, which includes lowering or dropping environmental regulations under the Clean Air Act, the Clean Water Act and others. It also proposes making federal lands available for data center construction, and pushes for American-made products to be used in building the infrastructure.

The Action Plan warns of cybersecurity risks and potential exposure to adversarial threats, saying that the government must develop secure frontier AI systems with national security agencies and develop “AI compute control enforcement,” to ensure security in AI systems and with semiconductor chips. It encourages collaboration with “like-minded nations” working toward AI models with shared values, but says it will counter Chinese influence.

“These clear-cut policy goals set expectations for the Federal Government to ensure America sets the technological gold standard worldwide, and that the world continues to run on American technology,” Secretary of State and Acting National Security Advisor Marco Rubio said in a statement.

The policy goals outlined in the plan fall in line with the deregulatory attitude Trump took during his campaign, as he aligned himself more closely with Silicon Valley tech giants, many of whom became Trump donors. The plan paves the way for continued unfettered growth of American AI models, and outlines the huge energy and computing power needed to keep up with those goals.

In an address at the “Winning the AI Race” Summit Wednesday evening, President Donald Trump called for a “single federal standard” regulating AI, not a state-by-state approach.

“You can’t have three or four states holding you up. You can’t have a state with standards that are so high that it’s going to hold you up,” Trump said. “You have to have a federal rule and regulation.”

The summit was hosted by the Hill & Valley Forum, a group of lawmakers and venture capitalists, and the All‑In Podcast, which is co-hosted by AI Czar Sacks.

In addition to discussing the AI action plan, Trump signed executive orders to fast-track data center permitting, expand AI exports including chips, software and data storage, and prohibit the federal government from procuring AI that has “partisan bias or ideological agendas.”

He spoke about the need for the U.S. to stay ahead in the global AI race, saying that the technology brings the “potential for bad as well as for good,” but that wasn’t reason enough to “retreat” from technological advancement. The U.S. is entering a “golden age,” he said in his speech.

“It will be powered by American energy. It will be run on American technology improved by American artificial intelligence, and it will make America richer, stronger, greater, freer, and more powerful than ever before,” Trump said.

During the address, Trump spoke about his evolving relationship with tech CEOs, crediting Amazon, Google and Microsoft with investing $320 billion in data centers and AI infrastructure this year.

“I didn’t like them so much my first term, when I was running, I wouldn’t say I was thrilled with them, but I’ve gotten to know them and like them,” Trump said. “And I think they got to like me, but I think they got to like my policies, maybe much more than me.”

Sam Altman, CEO of OpenAI — one of the tech giants that stands to flourish under the proposed policies — spoke Tuesday about the productivity and innovation potential that AI has unlocked. The growth of AI in the last five years has surprised even him, Altman said. But it also poses very real risks, he said, citing emotional attachment to and overreliance on AI, as well as foreign threats.

“Without a drop of malevolence from anyone, society can just veer in a sort of strange direction,” Altman said.

Senate votes 99-1 to remove AI moratorium from megabill

Republican Sens. Ted Cruz of Texas and Marsha Blackburn of Tennessee, shown here in a June 17, 2025, committee hearing, proposed paring down the moratorium on state-based AI laws included in the budget bill, but the provision still proved unpopular. On Monday, Blackburn cosponsored an amendment to remove the measure. (Photo by Kayla Bartkowski/Getty Images)

A moratorium on state-based artificial intelligence laws was struck from the “Big Beautiful Bill” Monday night in a 99-1 vote in the U.S. Senate, after growing steadily less popular with state and federal lawmakers, state officials and advocacy groups since it was introduced in May.

The moratorium had evolved in the seven weeks since it was introduced into the megabill. At an early May Senate Commerce Committee session, Sen. Ted Cruz of Texas said he planned to create “a regulatory sandbox for AI” that would prevent state overregulation and promote the United States’ AI industry.

GOP senators initially proposed a 10-year ban on all state laws relating to artificial intelligence, saying the federal government should be the only legislative body to regulate the technology. Over several hearings, congressional members and expert witnesses debated the level of involvement the federal government should take in regulating AI. They discussed states’ rights, safety concerns for the technology and how other governmental bodies, like the European Union, are regulating AI.

Over the weekend, Sen. Marsha Blackburn of Tennessee and Cruz developed a pared-down version of the moratorium that proposed a five-year ban and made exceptions for some laws with specific aims, such as protecting children or limiting deepfake technologies. The changes also tied states’ ability to collect federal funding for expanding broadband access to their willingness to nullify their existing AI laws.

Monday night, an amendment to remove the moratorium from the budget bill — cosponsored by Blackburn and Sen. Maria Cantwell, a Washington Democrat — was passed 99-1.

“The Senate came together tonight to say that we can’t just run over good state consumer protection laws,” Cantwell said in a statement. “States can fight robocalls, deepfakes and provide safe autonomous vehicle laws. This also allows us to work together nationally to provide a new federal framework on Artificial Intelligence that accelerates U.S. leadership in AI while still protecting consumers.” 

The “overwhelming” vote reflects how unpopular unregulated AI is among voters and legislators in both parties, Alexandra Reeve Givens, president and CEO of the tech policy organization Center for Democracy and Technology, said in a statement.

“Americans deserve sensible guardrails as AI develops, and if Congress isn’t prepared to step up to the plate, it shouldn’t prevent states from addressing the challenge,” Reeve Givens said. “We hope that after such a resounding rebuke, Congressional leaders understand that it’s time for them to start treating AI harms with the seriousness they deserve.”
