Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

Trump’s ‘truth seeking’ AI executive order is a complex, expensive policy, experts say

4 August 2025 at 10:00
U.S. President Donald Trump displays a signed executive order during the "Winning the AI Race" summit hosted by All‑In Podcast and Hill & Valley Forum at the Andrew W. Mellon Auditorium on July 23, 2025 in Washington, DC. Trump signed executive orders related to his Artificial Intelligence Action Plan during the event. (Photo by Chip Somodevilla/Getty Images)

U.S. President Donald Trump displays a signed executive order during the "Winning the AI Race" summit hosted by All‑In Podcast and Hill & Valley Forum at the Andrew W. Mellon Auditorium on July 23, 2025 in Washington, DC. Trump signed executive orders related to his Artificial Intelligence Action Plan during the event. (Photo by Chip Somodevilla/Getty Images)

An executive order signed by President Donald Trump last week seeks to remove “ideological agendas” from artificial intelligence models sold to the federal government, but it’s not exactly clear how the policy would be enforced, nor how tech companies would test their models for these standards, technologists and policy experts say.

The executive order says that AI companies must be “truth-seeking” and “ideologically neutral,” to work with the government, and mentions removing ideologies involving diversity, equity and inclusion from AI models.

“Really what they’re talking about is putting limitations on AI models that produce, in their mind, ideology that they find objectionable,” said Matthew  F. Ferraro, a partner at Crowell & Moring’s privacy and cybersecurity group.

Ferraro said that refinement of AI models and testing them for bias is a goal that gets support on both sides of the aisle. Ferraro spent two years working with the Biden administration on the development of its AI policy, and said that Trump’s AI Action Plan, rolled out last week, shares many pillars in common with the Biden administration’s AI executive order.

Both named the potentials for biohazard threats enabled by AI and the impact on the labor force in their AI action plans, and the importance of growing AI for American innovation, Ferraro said. Biden’s plan prioritized protections against harms, while Trump prioritized innovation above all else.

“It just so happened the Biden administration put those concerns first, not last, right?” Ferraro said. “But I was struck when reading the Trump AI action plan how similar the provisions that made recommendations the Department of Labor were to initiatives under the Biden administration.”

In the executive order, Trump assigns the Office of Management and Budget to issue guidance for compliance in the next 120 days. But Ferraro said as it stands, the order doesn’t provide enough clarity around the technical compliance companies would need to meet, nor how to reach a standard of “truth.”

“It’s not at all clear to me, one, what exactly this executive order seeks to prevent, beyond sort of a gestalt of political speech that the administration found objectionable,” Ferraro said. “Second, it’s not all clear to me, technically, how this is possible. And third, it’s not at all clear to me that even by its own terms, it’s going to be enforceable in any comprehensive way.”

Technical challenges

AI companies looking to contract with the government under the policies in this executive order would likely need to prepare for ongoing testing of their models, said David Acosta, cofounder and chief artificial intelligence officer of ARBOai.

When an AI model is in its early days of training, developers are mostly looking for its technical functionality. But ongoing reviews, or audits, of the model’s growth and accuracy can be done in a process called “red teaming,” where a third party introduces testing scenarios that help with compliance, bias testing and accuracy.

Auditing is a part of what Acosta’s company specializes in, and he said his team has been flooded with requests since last year. The team anticipated there could be a struggle around objectivity in AI.

“This is something that essentially we predicted late last year, as far as where this narrative would come from,” Acosta said. “Who would try to steer it, load it and push it forward, because essentially it’s going to create a culture war.”

Many AI models have a “black box” algorithm, meaning that the decision-making process of the model is unclear and difficult to understand, even for the developers. Because of this, constant training and testing for accuracy is needed to assess if the model is working as intended.

Acosta said that AI models should seek to meet certain accuracy requirements. A model should have a hallucination score — how often a model produces fabricated, inaccurate, or misleading information — below 5%. A model should also have “uncertainty tax,” the ability to tell a user that there’s a level of uncertainty in its output. 

This testing is expensive for AI firms, though, Acosta said. If the federal Office of Management and Budget requires some sort of testing to meet its requirements, it may limit smaller firms from being able to contract with the government.

Third-party red teaming uses thousands of prompts to prod models for hallucinations and politically loaded responses, Acosta said. A well-balanced model will refuse to answer questions using certain ideology or rhetoric, or produce primary sources to validate factual information.

“Explainable AI and reasoning have always, always, always, always been important in order to make sure that we can substantiate what the feedback is,” Acosta said. “But at the end of the day, it always comes down to who it is that’s training these machines, and if they can remain neutral.”

What is “truth-seeking?”

The dataset that an AI model is trained on can determine what “truth” is. Most datasets are based on human-created data, which in itself can contain errors and bias. It’s a system many laws around discrimination in AI are aiming to address. 

“One would hope that there’s a surface level of, not unified — because you’re never going to get a 100% — but a majority agreement on what absolute truth is and what it isn’t,” Acosta said.

He used an example of textbooks from decades ago; much of the information widely known as truth and taught in schools in the 1980s is now outdated. If an AI model is trained on those textbooks, its answers would not stand up to an AI model trained on a much larger, more diverse dataset.

But the removal of “ideological” information like race and sex from AI model training, as outlined in Trump’s executive order, poses a risk for inaccurate results. 

“It’s such a slippery slope, because who decides? Data sanitization – when you’re filtering the training of data to remove ‘overtly ideological sources’ — I mean the risk there is, that in itself, is a subjective process, especially when you’re coming to race, gender and religion,” Acosta said.

The concept of ideological truth is not something the AI algorithm considers, Ferraro said. AI does not produce truth, it produces probabilities.

“They simply produce the most common next token,” based on their training, Ferraro said. “The AI does not know itself what is true or false, or what is, for that matter, ideological or not. And so preventing outputs that expressed [Trump’s] favorite ideological opinions can simply basically exceed the ability of the model builders to build.”

Compliance

Enforcement of this executive order will require clarity from the OMB, said Peter Salib, an assistant professor of law at the University of Houston Law Center.

“It seems like there is one thing happening at the level of rhetoric and then another thing happening at the level of law,” Salib said.

The OMB would need to define the political ideology outlined in the executive order, and what is included or excluded, he said. For example, is a dataset tracking racial wealth inequality over 150 years considered some kind of DEI?

The order also puts companies that wish to comply with the executive order in a position to decide if they want to develop two versions of their AI models — one for the private sector, and one that fits Trump’s standards, Ferraro said.

There are exceptions carved out for some AI systems, including those contracted by national security agencies, the executive order said. Salib also highlighted a section that walks back some of the intent of the order, saying that it will consider vendors compliant as long as they disclose the prompts, specifications, evaluations and other training information for their model to the government. 

“It’s saying you don’t actually have to make your AI ideologically neutral,” Salib said. “You just have to share with the government the system prompts and the specifications you used when training the AI. So the AI can be as woke or as right wing, or as truth seeking or as falsity seeking as you like, and it counts as compliance with the neutrality provision if you just disclose.”

In this case, the executive order could turn out to be a de facto transparency regulation for AI, more so than one that imposes requirements on AI’s ideology, he said.

The rollout of Trump’s AI Action Plan and the executive orders that accompanied it, including one that fast tracks data center permitting and expands AI exports, are the first real look into how the administration intends to handle the quickly growing AI industry.

If the OMB rolls out technological and accuracy-focused requirements for the order, it could be a win for those looking for more transparency in AI, Acosta said. But the office needs to be clear in its requirements and seek true neutrality.

“I think the intent is well with this, but there needs to be much, much, much more refinement moving forward,” Acosta said. 

OpenAI CEO Sam Altman says AI has life-altering potential, both for good and ill

24 July 2025 at 10:15
OpenAI CEO Sam Altman shared his view of the promise and peril of advanced artificial intelligence at a Federal Reserve conference in Washington, D.C. on July 22, 2025. (Photo by Andrew Harnik/Getty Images)

OpenAI CEO Sam Altman shared his view of the promise and peril of advanced artificial intelligence at a Federal Reserve conference in Washington, D.C. on July 22, 2025. (Photo by Andrew Harnik/Getty Images)

For as much promise as artificial intelligence shows in making life better, OpenAI CEO Sam Altman is worried.

The tech leader who has done so much to develop AI and make it accessible to the public says the technology could have life-altering effects on nearly everything, particularly if deployed by the wrong hands.

There’s a possible world in which foreign adversaries could use AI to design a bio weapon to take down the power grid, or break into financial institutions and steal wealth from Americans, he said. It’s hard to imagine without superhuman intelligence, but it becomes “very possible,” with it, he said.

“Because we don’t have that, we can’t defend against it,” Altman said at a Federal Reserve conference this week in Washington, D.C. “We continue to like, flash the warning lights on this. I think the world is not taking us seriously. I don’t know what else we can do there, but it’s like, this is a very big thing coming.”

Altman joined the conference Tuesday to speak about AI’s role in the financial sector, but also spoke about how it is changing the workforce and innovation. The growth of AI in the last five years has surprised even him, Altman said.

He acknowledged real fear that the technology has potential to grow beyond the capabilities that humans prompt it for, but said the time and productivity savings have been undeniable. 

OpenAI’s most well-known product, ChatGPT, was released to the public in November 2022, and its current model, GPT-4o, has evolved. Last week, the company had a model that achieved “gold-level performance,” akin to operating as well as humans that are true experts in their field, Altman said.

Many have likened the introduction of AI to the invention of the internet, changing so much of our day-to-day lives and workplaces. But Altman instead compared it to the transistor, a foundational piece of hardware invented in the 1940s that allowed electricity to flow through devices.

“It changed what we were able to build. It became part of, kind of, everything pretty quickly,” Altman said. “And in the same way, I don’t think you’ll be talking about AI companies for very long, you will just expect products and services to use this technology.”

When prompted by the Federal Reserve’s Vice Chair for Supervision Michelle Bowman to predict how AI will continue to evolve the workforce, Altman said he couldn’t make specific predictions.

“There are cases where entire classes of jobs will go away,” Altman said. “There are entirely new classes of jobs that will come and largely, I think this will look somewhat like most of history, and that the tools people have to use their jobs will let them do more, achieve things in new ways.” 

One of the unexpected upsides to the rollout of GPT has been how much it is used by small businesses, Altman said. He shared a story of an Uber driver who told him he was using ChatGPT for legal consultations, customer support, marketing decisions and more.

“It was not like he was taking jobs from other people. His business just would have failed,” Altman said. “He couldn’t pay for the lawyers. He couldn’t pay for the customer support people.”

Altman said he was surprised that the financial industry was one of the first to begin integrating GPT models into their work because it is highly regulated, but some of their earliest enterprise partners have been financial institutions like Morgan Stanley. The company is now increasingly working with the government, which has its own standards and procurement process for AI, to roll out OpenAI services to its employees.

Altman acknowledged the risks AI poses in these regulated institutions, and with the models themselves. Financial services are facing a fraud problem, and AI is only making it worse — it’s easier than ever to fake voice or likeness authentication, Altman said.

AI decisionmaking in financial and other industries presents data privacy concerns and potential for discrimination. Altman said GPT’s model is “steerable,” in that you can tell it to not consider factors like race or sex in making a decision, and that much of the bias in AI comes from the humans themselves.

“I think AIs are dispassionate and unemotional,” Altman said. “And I think it’ll be possible for AI — correctly built — to be a significant de-biasing force in many industries, and I think that’s not what many people thought, including myself, with the way we used to do AI.”

As much as Altman touted GPT and other AI models’ ability to increase productivity and save humans time, he also spoke about his concerns.

He said that though it’s been greatly improved in more recent models, AI hallucinations, or models that produce inaccurate or made-up outputs, are possible. He also spoke of a newer concept called prompt injections, the idea that a model that has learned personal information can be tricked into telling a user something they shouldn’t know.

In addition to the threat of foreign adversaries using AI for harm, Altman said he has two other major concerns for the evolution of AI. It feels very unlikely, he said, but “loss of control,” or the idea that AI overpowers humans, is possible.

What concerns him the most is the idea that models could get so integrated into society and get so smart that humans become reliant on them without realizing.

“And even without a drop of malevolence from anyone, society can just veer in a sort of strange direction,” he said.

There are mild cases of this happening, Altman said, like young people overrelying on ChatGPT make emotional, life-altering decisions for them.

“We’re studying that. We’re trying to understand what to do about it,” Altman said. “Even if ChatGPT gives great advice, even if chatGPT gives way better advice than any human therapist, something about kind of collectively deciding we’re going to live our lives the way that the AI tells us feels bad and dangerous.” 

❌
❌