AI is making it easier for bad actors to create biosecurity threats

The spread of artificial intelligence worries biosecurity experts, who say the technology could lead to accidental or deliberate creation and release of dangerous diseases and toxic substances. (Photo by LuShaoJi/Getty Images)

Artificial intelligence is helping accelerate the pace of scientific discovery, but the technology also makes it easier than ever to create biosecurity threats and weapons, cybersecurity experts say. 

It’s an issue that currently flies under the radar for most Americans, said Lucas Hansen, cofounder of AI education nonprofit CivAI.

The COVID-19 pandemic increased awareness of biosecurity measures globally, and some instances of bioterrorism, like the 2001 anthrax attacks, are well known. But advancements in AI have made information about how to create biosecurity threats, like viruses, bacteria and toxins, so much more accessible in just the last year, Hansen said.  

“Many people on the face of the planet already could create a bio weapon,” Hansen said. “But it’s just pretty technical and hard to find. Imagine AI being used to [multiply] the number of people that are capable of doing that.”

It’s an issue that OpenAI CEO Sam Altman spoke about at a Federal Reserve conference in July. 

“We continue to like, flash the warning lights on this,” Altman said. “I think the world is not taking us seriously. I don’t know what else we can do there, but it’s like, this is a very big thing coming.”

AI increasing biosecurity threats

Hansen said there are primarily two ways he believes AI could be used to create biosecurity threats. The less common of the two, he believes, would be using AI to make bioweapons more dangerous than any that have existed before, drawing on technologies that enable the engineering of biological systems, such as creating new viruses or toxic substances.

Second, and more commonly, Hansen said, AI is making information about existing harmful viruses or toxins much more readily accessible. 

Consider the polio virus, Hansen said. There are plenty of scientific journals that share information on the origins and growth of polio and other viruses that have been mostly eradicated, but the average person would have to do extensive research and data collection to piece together how to recreate it.

A few years ago, AI models didn’t have great metacognition, or the ability to give instructions, Hansen said. But in the last year, updates to models like Claude and ChatGPT have enabled them to interpret more information and fill in the gaps.

Paromita Pain, an associate professor of global media at the University of Nevada, Reno, and an affiliated faculty member of the university’s cybersecurity center, said she believes there’s a third circumstance that could be contributing to biosecurity threats: accidents. Increased access to such information by people not properly trained to handle it could have unintended consequences.

“It’s essentially like letting loose teenagers in the lab,” Pain said. “It’s not as if people are out there to willingly do bad, like, ‘I want to create this pathogen that will wipe out mankind.’ Not necessarily. It’s just that they don’t know that if you are developing pathogens, you need to be careful.”

For those who are looking to do harm, though, it’s not hard, Hansen said. CivAI offers demos to show how AI can be used in various scenarios, with the goal of highlighting the potential harms the technology can cause if not used responsibly.

In a demo not available to the public, Hansen showed States Newsroom how someone might use a current AI model to assist them in creating a biothreat. CivAI keeps the example private so as not to inspire any nefarious actions, Hansen said.

Though many AI models are trained to flag and not to respond to dangerous requests, like how to build a gun or how to recreate a virus, many can be “jailbroken” easily, with a few prompts or lines of code, essentially tricking the AI into answering questions it was instructed to ignore.

Hansen walked through the polio virus example, prompting a jailbroken version of Claude 4.0 Sonnet to give him instructions for recreating the virus. Within a few seconds, the model provided 13 detailed steps, including directions like “order the custom plasmid online,” with links to manufacturers. 

The models are scraping information from a few public research papers about the polio virus, but without the step-by-step instructions, it would be very hard to find what you’re looking for, make a plan and source the materials you’d need. The models sometimes add information to supplement the scientific papers, helping non-expert users understand complex language, Hansen said.

It would still take many challenging steps, including accessing lab equipment and rare materials, to recreate the virus, Hansen said, but AI has made access to the core information behind these feats so much more available. 

“AI has turned bioengineering from a Ph.D. level skill set to something that an ambitious high school student could do with some of the right tools,” said Neil Sahota, an AI advisor to the United Nations, and a cofounder of its AI for Good initiative.

CivAI estimates that since 2022, the number of people who would be capable of recreating a virus like polio with publicly available tools and resources has grown from 30,000 globally to 200,000 today because of AI. The organization projects 1.5 million people could be capable by 2028. An increase in the number of languages that AI models are fluent in also increases the chances of a global issue, Hansen said.

“I think the language thing is really, really important, because part of what we’re considering here is the number of people that are capable of doing these things and removing a language barrier is a pretty big deal,” he said.

How is the government addressing it? 

The current Trump administration and the previous Biden administration introduced similar strategies for addressing the threats. Biden’s October 2023 executive order, “Safe, Secure, and Trustworthy Development and Use of AI,” sought to create guidelines to evaluate and audit AI capabilities “through which AI could cause harm, such as in the areas of cybersecurity and biosecurity.”

Trump’s AI Action Plan, which rolled out in July, said AI could “unlock nearly limitless potential in biology,” but could also “create new pathways for malicious actors to synthesize harmful pathogens and other biomolecules.” 

In the action plan, Trump says he wants to require scientific institutions that receive federal funding to verify customers and to create enforcement guidelines. The plan also says the Office of Science and Technology Policy should develop a way for providers of nucleic acid synthesis — the process of creating DNA and RNA — to share data and screen for malicious customers.

Sahota said the potential benefits of bioengineering AI make regulating it complicated. The models can help accelerate vaccine development and research into genetic disorders, but can also be used nefariously.

“AI in itself is not good or evil, it’s just a tool,” Sahota said. “And it really depends on how people use it. I don’t think like a bad actor, and many people don’t, so we’re not thinking about how they may weaponize these tools, but someone probably is.”

California aimed to address biosecurity in SB 1047 last year, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” which sought to regulate foundational AI models and impose obligations on companies that develop them to ensure safety and security measures. 

The act outlines many potential harms, but among them was AI’s potential to help “create novel threats to public safety and security, including by enabling the creation and the proliferation of weapons of mass destruction, such as biological, chemical, and nuclear weapons.”

After passing both chambers, the act was vetoed by Gov. Gavin Newsom in September for potentially “curtailing the very innovation that fuels advancement in favor of the public good.”

Pain said few international frameworks exist for how to share biological data and train AI systems around biosecurity, and it’s unclear whether AI developers, biologists, publishers or governments could be held accountable for its misuse. 

“Everything that we are talking about when it comes to biosecurity and AI has already happened without the existence of AI,” she said of previous biothreats.

Sahota said he worries we may need to see a real-life example of AI being weaponized for a biological threat, “where we feel the pain on a massive scale,” before governments get serious about regulating the technology.

Hansen agrees, and he predicts those moments may be coming. While some biological attacks could come from coordinated groups aiming to pull off a terrorist incident, Hansen said he worries about the “watch the world burn” types — nihilistic individuals who have historically turned to mass shootings.

“Right now, they look for historical precedent on how to cause collateral damage, and the historical precedent that they see is public shootings,” Hansen said. “I think very easily it could start to be the case that deploying bio weapons becomes pretty normal. I think after the first time that that happens in real life, we’ll start seeing a lot of copycats. And that makes me pretty, pretty nervous.”

This story was originally produced by National, which is part of States Newsroom, a nonprofit news network which includes Wisconsin Examiner, and is supported by grants and a coalition of donors as a 501c(3) public charity.

Free AI testing platform rolled out to federal employees

OpenAI CEO Sam Altman (right), accompanied by President Donald Trump, speaks during a news conference at the White House on Jan. 21, 2025. Trump announced an investment in artificial intelligence (AI) infrastructure. (Photo by Andrew Harnik/Getty Images)

As a part of President Donald Trump’s AI Action Plan, which rolled out at the end of last month, the U.S. General Services Administration launched a platform Thursday that will allow government employees to experiment with artificial intelligence tools.

USAi.gov allows federal workers to use generative AI tools, such as chatbots, code builders and document summarizers, for free. The platform is meant to help government employees determine which tools could be helpful to procure for their current work, and how they might customize them to their specific needs, a statement from the administration said.

The tools will come primarily from AI companies Anthropic, OpenAI, Google and Meta, FedScoop reported. OpenAI announced a partnership with the federal government last week, saying any federal agency would be able to use ChatGPT Enterprise for $1 per agency for the next year.

“USAi means more than access — it’s about delivering a competitive advantage to the American people,” said GSA Deputy Administrator Stephen Ehikian, in the statement.

The GSA called the platform a “centralized environment for experimentation,” and said it will track performance and adoption strategies in a dashboard.

The platform’s creation follows Trump’s recently released plan to “accelerate AI innovation” by removing red tape around “onerous” regulations, and get AI into the hands of more workers, including federal employees.

The plan also calls for AI to be more widely adopted in manufacturing, science and in the Department of Defense, and proposes increased funding and regulatory sandboxes — separate trial spaces, like the USAi platform — for development.

A GSA official told FedScoop that before being added to the platform, AI models will be evaluated for safety, such as whether a model outputs hate speech, for performance accuracy, and for how it was red-teamed, or adversarially tested for weaknesses.

But the GSA didn’t say how the introduction of USAi.gov would affect the federal government’s current tech procurement process, FedRAMP. The program, developed with the National Institute of Standards and Technology (NIST), provides a standardized way for government agencies to assess the safety and effectiveness of new tech tools.

“USAi helps the government cut costs, improve efficiency, and deliver better services to the public, while maintaining the trust and security the American people expect,” said GSA Chief Information Officer David Shive in a statement.

This story was originally produced by National, which is part of States Newsroom, a nonprofit news network which includes Wisconsin Examiner, and is supported by grants and a coalition of donors as a 501c(3) public charity.

Trump’s ‘truth seeking’ AI executive order is a complex, expensive policy, experts say

U.S. President Donald Trump displays a signed executive order during the "Winning the AI Race" summit hosted by All‑In Podcast and Hill & Valley Forum at the Andrew W. Mellon Auditorium on July 23, 2025 in Washington, DC. Trump signed executive orders related to his Artificial Intelligence Action Plan during the event. (Photo by Chip Somodevilla/Getty Images)

An executive order signed by President Donald Trump last week seeks to remove “ideological agendas” from artificial intelligence models sold to the federal government, but it’s not exactly clear how the policy would be enforced, nor how tech companies would test their models for these standards, technologists and policy experts say.

The executive order says that AI companies must be “truth-seeking” and “ideologically neutral” to work with the government, and mentions removing ideologies involving diversity, equity and inclusion from AI models.

“Really what they’re talking about is putting limitations on AI models that produce, in their mind, ideology that they find objectionable,” said Matthew F. Ferraro, a partner at Crowell & Moring’s privacy and cybersecurity group.

Ferraro said that refining AI models and testing them for bias is a goal that gets support on both sides of the aisle. Ferraro spent two years working with the Biden administration on the development of its AI policy, and said that Trump’s AI Action Plan, rolled out last week, shares many pillars with the Biden administration’s AI executive order.

Both plans named the potential for AI-enabled biohazard threats, the technology’s impact on the labor force, and the importance of growing AI for American innovation, Ferraro said. Biden’s plan prioritized protections against harms, while Trump’s prioritized innovation above all else.

“It just so happened the Biden administration put those concerns first, not last, right?” Ferraro said. “But I was struck when reading the Trump AI action plan how similar the provisions that made recommendations [to] the Department of Labor were to initiatives under the Biden administration.”

In the executive order, Trump assigns the Office of Management and Budget to issue compliance guidance within the next 120 days. But Ferraro said that, as it stands, the order doesn’t provide enough clarity about the technical compliance requirements companies would need to meet, or about how to reach a standard of “truth.”

“It’s not at all clear to me, one, what exactly this executive order seeks to prevent, beyond sort of a gestalt of political speech that the administration found objectionable,” Ferraro said. “Second, it’s not all clear to me, technically, how this is possible. And third, it’s not at all clear to me that even by its own terms, it’s going to be enforceable in any comprehensive way.”

Technical challenges

AI companies looking to contract with the government under the policies in this executive order would likely need to prepare for ongoing testing of their models, said David Acosta, cofounder and chief artificial intelligence officer of ARBOai.

When an AI model is in its early days of training, developers are mostly looking at its technical functionality. But ongoing reviews, or audits, of the model’s growth and accuracy can be done through a process called “red teaming,” in which a third party introduces testing scenarios that assess compliance, bias and accuracy.

Auditing is a part of what Acosta’s company specializes in, and he said his team has been flooded with requests since last year. The team anticipated there could be a struggle around objectivity in AI.

“This is something that essentially we predicted late last year, as far as where this narrative would come from,” Acosta said. “Who would try to steer it, load it and push it forward, because essentially it’s going to create a culture war.”

Many AI models have a “black box” algorithm, meaning that the decision-making process of the model is unclear and difficult to understand, even for the developers. Because of this, constant training and testing for accuracy is needed to assess if the model is working as intended.

Acosta said that AI models should seek to meet certain accuracy requirements. A model should have a hallucination score — how often it produces fabricated, inaccurate or misleading information — below 5%. A model should also have an “uncertainty tax,” the ability to tell a user that there’s a level of uncertainty in its output.
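
To make those thresholds concrete, here is a minimal sketch, in Python, of how an auditor might tally the two figures; the EvalRecord fields and the audit helper are hypothetical, and only the 5% hallucination cutoff comes from Acosta’s description.

```python
# Hypothetical sketch of checking a model's evaluation results against the
# accuracy targets Acosta describes. The record format and helper names are
# invented; only the 5% hallucination threshold comes from the article.
from dataclasses import dataclass

@dataclass
class EvalRecord:
    prompt: str
    response: str
    is_hallucination: bool      # labeled by human reviewers or a grading model
    flagged_uncertainty: bool   # did the model signal uncertainty in its answer?

def audit(records: list[EvalRecord], max_hallucination_rate: float = 0.05) -> dict:
    """Summarize how often the model hallucinated and how often it hedged."""
    total = len(records)
    hallucinations = sum(r.is_hallucination for r in records)
    hedged = sum(r.flagged_uncertainty for r in records)
    rate = hallucinations / total if total else 0.0
    return {
        "hallucination_rate": rate,                        # e.g. 3 of 100 -> 0.03
        "passes_threshold": rate < max_hallucination_rate,
        "uncertainty_flag_rate": hedged / total if total else 0.0,
    }
```

In this framing, a model with three hallucinated answers out of 100 graded responses would score 3% and clear the 5% bar Acosta cites.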

This testing is expensive for AI firms, though, Acosta said. If the federal Office of Management and Budget requires some sort of testing to meet its requirements, it may limit smaller firms from being able to contract with the government.

Third-party red teaming uses thousands of prompts to prod models for hallucinations and politically loaded responses, Acosta said. A well-balanced model will refuse to answer questions using certain ideology or rhetoric, or produce primary sources to validate factual information.
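
In broad strokes, that kind of third-party pass could be organized like the sketch below; query_model, classify_response and the prompt file are placeholders, since the article does not describe any specific vendor’s harness.

```python
# Illustrative red-teaming loop, not any vendor's actual tooling: feed a large
# bank of adversarial prompts to the model under audit and tally how graders
# label each response.
import json
from collections import Counter

def query_model(prompt: str) -> str:
    raise NotImplementedError("placeholder for the model API under audit")

def classify_response(response: str) -> str:
    # Graders (human or automated) would assign labels such as "refusal",
    # "sourced_answer", "hallucination" or "politically_loaded".
    raise NotImplementedError("placeholder for the grading step")

def red_team(prompt_file: str) -> Counter:
    with open(prompt_file) as f:
        prompts = json.load(f)          # thousands of prompts, per Acosta
    tallies = Counter()
    for prompt in prompts:
        tallies[classify_response(query_model(prompt))] += 1
    return tallies
```

The resulting tallies would feed the kind of report Acosta describes: how often the model refused loaded questions, cited primary sources or produced hallucinations.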

“Explainable AI and reasoning have always, always, always, always been important in order to make sure that we can substantiate what the feedback is,” Acosta said. “But at the end of the day, it always comes down to who it is that’s training these machines, and if they can remain neutral.”

What is “truth-seeking”?

The dataset that an AI model is trained on can determine what “truth” is. Most datasets are based on human-created data, which in itself can contain errors and bias. It’s a problem that many laws around discrimination in AI are aiming to address.

“One would hope that there’s a surface level of, not unified — because you’re never going to get a 100% — but a majority agreement on what absolute truth is and what it isn’t,” Acosta said.

He used an example of textbooks from decades ago; much of the information widely known as truth and taught in schools in the 1980s is now outdated. If an AI model is trained on those textbooks, its answers would not stand up to an AI model trained on a much larger, more diverse dataset.

But the removal of “ideological” information like race and sex from AI model training, as outlined in Trump’s executive order, poses a risk of inaccurate results.

“It’s such a slippery slope, because who decides? Data sanitization – when you’re filtering the training of data to remove ‘overtly ideological sources’ — I mean the risk there is, that in itself, is a subjective process, especially when you’re coming to race, gender and religion,” Acosta said.

The concept of ideological truth is not something the AI algorithm considers, Ferraro said. AI does not produce truth; it produces probabilities.

“They simply produce the most common next token,” based on their training, Ferraro said. “The AI does not know itself what is true or false, or what is, for that matter, ideological or not. And so preventing outputs that expressed [Trump’s] favorite ideological opinions can simply basically exceed the ability of the model builders to build.”
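
A toy calculation illustrates the point; the words and scores below are invented, and show only that the model ranks continuations by probability rather than by whether they are true.

```python
# Toy next-token example: a language model turns scores into probabilities and
# picks the most likely continuation; nothing in the math encodes truth.
import math

def softmax(logits: dict[str, float]) -> dict[str, float]:
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Invented scores for continuations of "The earth is ..."
next_token_logits = {"round": 3.1, "warming": 2.2, "flat": 0.4}
probs = softmax(next_token_logits)
print(probs)                        # probabilities, not facts
print(max(probs, key=probs.get))    # "round" simply has the highest score
```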

Compliance

Enforcement of this executive order will require clarity from the OMB, said Peter Salib, an assistant professor of law at the University of Houston Law Center.

“It seems like there is one thing happening at the level of rhetoric and then another thing happening at the level of law,” Salib said.

The OMB would need to define the political ideology outlined in the executive order, and what is included or excluded, he said. For example, is a dataset tracking racial wealth inequality over 150 years considered some kind of DEI?

The executive order also puts companies that wish to comply in a position to decide if they want to develop two versions of their AI models — one for the private sector, and one that fits Trump’s standards, Ferraro said.

There are exceptions carved out for some AI systems, including those contracted by national security agencies, the executive order said. Salib also highlighted a section that walks back some of the intent of the order, saying that it will consider vendors compliant as long as they disclose the prompts, specifications, evaluations and other training information for their model to the government. 

“It’s saying you don’t actually have to make your AI ideologically neutral,” Salib said. “You just have to share with the government the system prompts and the specifications you used when training the AI. So the AI can be as woke or as right wing, or as truth seeking or as falsity seeking as you like, and it counts as compliance with the neutrality provision if you just disclose.”

In this case, the executive order could turn out to be a de facto transparency regulation for AI, more so than one that imposes requirements on AI’s ideology, he said.

The rollout of Trump’s AI Action Plan and the executive orders that accompanied it, including one that fast-tracks data center permitting and expands AI exports, are the first real look into how the administration intends to handle the quickly growing AI industry.

If the OMB rolls out technological and accuracy-focused requirements for the order, it could be a win for those looking for more transparency in AI, Acosta said. But the office needs to be clear in its requirements and seek true neutrality.

“I think the intent is well with this, but there needs to be much, much, much more refinement moving forward,” Acosta said. 
