
AI holds promise in scientific research, but can’t substitute for humans, experts say

14 April 2025 at 10:30
Jennifer Kang-Mieler, right, and PhD student Chaimae Gouya review images they’re using to train an AI model to detect an eye disorder in premature infants. (Photo by Ashley Muliawan/Stevens Institute of Technology)

With the Trump administration making sweeping cuts to staff and research grants at science-related agencies, artificial intelligence could offer a tempting way to keep labs going, but scientists say there are limits to the technology’s uses.

The Trump-appointed leaders of the National Institutes of Health, the Centers for Disease Control and Prevention and the Department of Health and Human Services have moved in the last few months to cut thousands of jobs and billions of dollars in federal grants that fund university research and laboratory needs.

The federal government may be eyeing artificial intelligence to bridge a gap created by these cuts. In February, the U.S. Department of Energy’s national labs partnered with AI companies OpenAI and Anthropic for an “AI Jam Session,” a day for 1,000 scientists across various disciplines to test the companies’ AI models and share feedback. Some figures in Trump’s cabinet have suggested that artificial intelligence models may be a good substitute for human physicians.

But scientists and builders of AI say it’s not that simple.

AI is playing a major role in scientific discovery — last year’s Nobel Prize in Chemistry was awarded to three scientists for discoveries that used AI to predict the shape of proteins and to invent new ones.

But we aren’t looking at a future where we can substitute researchers and doctors with algorithms, said Jennifer Kang-Mieler, department chair and professor of biomedical engineering at Stevens Institute of Technology in New Jersey.

“It’s a tool they may use to enhance clinical decision-making,” she said. “But I think that clinical expertise is not going to be something that we can completely match with AI.”

Kang-Mieler and other researchers say AI has its limitations, but is playing an increasingly important role in analyzing data, speeding up lab work, assisting in diagnostics, making personalized treatment plans and in cutting some costs related to research.

AI uses in scientific labs and healthcare

Artificial intelligence technologies have been a part of some healthcare and laboratory settings, like image recognition in radiology, for at least a decade, said Bradley Bostic, chairman and CEO of healthcare software company hc1, based in Indiana. But Bostic said the industry is still early in exploring its uses.

“It feels to me similar to 1999, with the World Wide Web,” Bostic said. “We didn’t realize how early days it was. I think that’s where we are right now with AI and specifically in healthcare.”

While AI’s potential is nearly endless, its best uses in scientific and healthcare settings are for tasks that are repetitive and operational, Bostic said. Ideally, AI makes processes more efficient and frees up humans’ time to focus on more important tasks.

Stephen Wong, the John S. Dunn Presidential Distinguished Chair in Biomedical Engineering at Houston Methodist, uses machine learning, deep learning and large language models every day in his lab, which researches cost-effective strategies for disease management.

He said he uses AI models for image analysis and medical imaging, for processing massive datasets in genomics and in proteomics, the study of proteins, and for drug screening, as well as for sifting through existing research and lab data. His goal is to cut down on tedious tasks and make sense of large-scale data.

“Even tasks like locating crucial information buried in lab notebooks, scientific publications and patents become far more efficient,” he said.

Efficiency is also the goal of Kang-Mieler’s research, which was funded last fall by an NIH grant. Kang-Mieler and colleague Yu Gan are developing an AI-powered diagnostic tool for retinopathy of prematurity (ROP), an eye disorder that can cause vision loss in premature infants.

There was a lack of quality images for AI models to train on, Kang-Mieler said, so they are using images of animal eyes with ROP to create “synthetic” images of what the condition would look like in humans. The neural networks in the AI model will learn to categorize those synthetic images and eventually assist eye doctors in spotting ROP. Before AI tools, this work would have been done by the human eye and would have taken much longer, Kang-Mieler said.

“The way I saw it was also that if we can be really successful in developing and doing this, we can actually take this into other types of diseases, rare diseases, that are hard to diagnose,” she said.

Automation and human capital

Many scientific labs require a lot of physical tasks, like handling liquids, following steps at specific times and sometimes handling hazardous materials. With AI algorithms and hardware, much of that work can be done without humans physically present, researchers at the University of North Carolina are finding.

Ron Alterovitz, the Lawrence Grossberg Distinguished Professor in the Department of Computer Science, has worked with Jim Cahoon, chair of the Department of Chemistry, on an approach to make lab work more autonomous. The pair have studied how an AI model could instruct an autonomous robot to execute lab processes, and then how AI models could analyze the experimental results and turn them into findings. Alterovitz called it a “make and test” model.

“So once people can set it in motion, the AI comes up with a design, the robotic automation will make and test it, and the AI will analyze the results and come up with a new design,” he said. “And this whole loop can essentially run autonomously.”

The pair published their findings last fall, saying there are several levels of automation a lab could deploy, from assistive automation, where tasks like liquid handling are automated and humans do everything else, all the way up to the fully automated loop Alterovitz described.

Alterovitz sees many benefits to automated labs. Robots offer a safer method of handling hazardous materials, and allow researchers to conduct experiments 24 hours a day, instead of just when lab techs are clocked in. The robots also provide high accuracy and precision, and can replicate experiments easily, he said.

“If you ask two different people to do the same synthesis process, there’ll be subtle differences in how they do some of the details that can lead to some variance in the results sometimes,” Alterovitz said. “With robots, it’s just done the same way every time, very repeatedly.”

While there are fears that AI and automation will cut jobs in science, Alterovitz said the technology frees humans to do higher-level tasks. Many labs are already facing a shortage of the trained technicians who do a majority of the physical work involved.

AI-assisted labs will likely heighten the need for other types of jobs, like data scientists, AI specialists and interdisciplinary experts who can bridge technology with real-world scientific applications, Wong said.

In order to continue innovating and learning new things, labs will still need the “chemical intuition” and problem-solving skills that trained scientists have, Alterovitz said.

AI’s limitations

Kang-Mieler says AI’s current limitations are one factor keeping the industry from rushing to apply the technology to everything. AI models are only as good as the datasets they’re trained on, and those datasets can carry biases or gaps that keep a model from painting a full picture.

And AI models can’t do an essential function of researchers, Kang-Mieler said — discover new information.

“I suppose that AI models can help formulate new hypotheses, but I don’t think that capability is the same as discovery,” Kang-Mieler said. “Current AI models are not developed to make independent discoveries or have original thoughts.”

Bostic has built other technology companies in his career, but said the stakes in scientific research and healthcare are much higher. Inaccurate data in an AI model could lead to a missed diagnosis or another huge problem for a patient. He said the best approach is what he calls “reinforcement learning through human feedback.”

“This is where you don’t have models that are just running independent of people,” Bostic said. “You have the models that are complementing the people and actually being informed by the people.”

As the tech industry evolves, Bostic said, AI will play a role in shortening drug trials, providing patients more specialized care and helping research teams make do with fewer skilled workers. But it’s not a fix-all, set-it-and-forget-it solution.

“I don’t see a scenario where clinical decisions are being independently made by machines and there aren’t the experts — who are trained and seeing the total picture of what’s going on with the patient — involved with those decisions anytime soon,” he said. 


As demand for AI rises, so do power thirsty data centers

13 April 2025 at 23:00
A complex of data centers in Ashburn, Va. (Photo by Gerville/Getty Images)


This is the first of two States Newsroom stories examining the implications of the growing demand for electricity, driven largely by artificial intelligence and data centers. Read the second here.

The next time you’re on a Zoom meeting or asking ChatGPT a question, picture this: The information zips instantaneously through a room of hot, humming servers, traveling hundreds, possibly thousands of miles, before it makes its way back to you in just a second or two.

It can be hard to wrap your mind around, said Vijay Gadepally, a senior scientist at Massachusetts Institute of Technology’s Lincoln Laboratory, but large data centers are where nearly all artificial intelligence systems and computing happen today.

“Each one of these AI models has to sit on a server somewhere, and they tend to be very, very big,” he said. “So if your millions or billions of users are talking to the system simultaneously, the computing systems have to really grow and grow and grow.”

As the United States works to be a global AI superpower, it’s become a home to hundreds of data centers — buildings that store and maintain the physical equipment needed to compute information.

For users of the new and increasingly popular AI tools, it might seem like the changes have been all online, without a physical footprint. But the rise of AI has tangible effects — data centers and the physical infrastructure needed to run them use large amounts of energy, water and other resources, experts say.

“We definitely try to think about the climate side of it with a critical eye,” said Jennifer Brandon, a science and sustainability consultant. “All of a sudden, it’s adding so much strain on the grid to some of these places.”

The rise of data centers

As society traded large, desktop computers for sleek laptops, and internet infrastructure began supporting AI models and other software tools, the U.S. has built the physical infrastructure to support growing computing power.

Large language models (LLMs) and machine learning (ML) technologies — the foundation of most modern AI tools — have been used by technologists for decades, but only in the last five to seven years have they become commercialized and used by the general public, said David Acosta, cofounder and chief artificial intelligence officer of ARBOai.

To train and process information, these fast-learning AI models require graphics processing units (GPUs), servers, storage, cabling and other networking equipment, all housed in data centers across the country. Computers have been storing and processing data off-site in dedicated centers for decades, but the dot-com bubble of the early 2000s and the move to cloud storage have demanded much more capacity over the last decade.

As more things moved online, and computing hardware and chip technology supported faster processing, AI models became accessible to the general public, Acosta said. Current AI models use thousands of GPUs to operate, and training a single chatbot like ChatGPT uses about the same amount of energy as 100 homes over the course of a year.

“And then you multiply that times the thousands of models that are being trained,” Acosta said. “It’s pretty intense.”

The United States is currently home to more than 3,600 data centers, but about 80% of them are concentrated in 15 states, Data Center Map shows. The market has doubled since 2020, Forbes reported, with 21% year over year growth. For many years, nearly all of the country’s data centers were housed in Virginia, and the state is considered a global hub with nearly 70% of the world’s internet traffic flowing through its nearly 600 centers. Texas and California follow Virginia, with 336 and 307 centers, respectively.

Tech companies that require large amounts of computing power, the private equity firms and banks that invest in them, and other real estate or specialized firms are the primary funders of data centers. In September, BlackRock, Global Infrastructure Partners, Microsoft and AI investment fund MGX invested $30 billion into new and expanded data centers, primarily in the U.S., and said they will seek $100 billion in total investment, including debt financing.

Investment in American data center infrastructure is encouraging considering the global “AI arms race” we’re in, Acosta said.

“If you own the data, you have the power,” Acosta said. “I just think we just make sure we do it ethically and as preemptive as possible.”

The shuttered Three Mile Island nuclear power plant stands in the middle of the Susquehanna River on October 10, 2024 near Middletown, Pennsylvania. The plant’s owner, Constellation Energy, plans to spend $1.6 billion to refurbish the reactor that it closed five years ago and restart it by 2028 after Microsoft recently agreed to buy as much electricity as the plant can produce for the next 20 years to power its growing fleet of data centers. (Photo by Chip Somodevilla/Getty Images)

Energy and environmental impact

Current estimates say data centers are responsible for about 2% of U.S. energy demand, but Anthony DeOrsey, a research manager at sustainable energy research firm Cleantech Group, projects data centers will account for about 10% of demand by 2027.

As data centers are developed in new communities across the country, residents and their state legislators see a mix of financial benefits with energy and environmental challenges.

The development of data centers brings some infrastructure jobs to an area, and in busy data center communities, like Virginia’s Loudoun and Prince William counties, centers can generate millions in tax revenue, the Virginia Mercury reported.

Local governments can be eager to strike deals with the tech companies or private equity firms seeking to build, but the availability and cost of power is a primary concern. New large data centers require the electricity equivalent of about 750,000 homes, according to a February report from sustainability consultancy BSI and real estate services firm CBRE.

Under many states’ utility structures, local residents can be subjected to electricity price increases to meet the big power needs of data centers. Some legislators, like Georgia State Sen. Chuck Hufstetler, have sought to protect residential and commercial customers from getting hit with higher utility bills.

Granville Martin, an Eastern Shore, Connecticut-based lawyer with expertise in finance and environmental regulation, said the same problem has come up in his own community.

“The argument was, the locals didn’t want this data center coming in there and sucking up a bunch of the available power because their view — rightly or wrongly, and I think rightly — was well, that’s just going to raise our rates,” Martin said.

Some states are exploring alternative energy sources. In Pennsylvania, Constellation Energy made a deal to restart its nuclear power plant at Three Mile Island to provide carbon-free electricity to offset Microsoft’s power usage at its nearby data centers.

But climate experts have concerns about data centers outside of their power demand.

“The general public is largely unaware that cooling industrial facilities, whatever they might be, is actually a really, really important aspect of their function,” Martin said.

The equipment in data centers, many of which run 24/7, generates a lot of heat. To regulate temperature, most centers pump water through tubing surrounding the IT equipment and use air conditioning systems to keep those structures cool. About 40% of a data center’s energy consumption goes to cooling, Cleantech Group found.

Some centers have a closed-loop system, recycling grey water through the same system, but many use fresh drinking water. The amount of water and energy used in cooling is enormous, said Brandon, the sustainability consultant.

“The current amount of AI data centers we have takes six times the amount of water as the country of Denmark,” she said. “And then we are using the same amount of energy as Japan, which is the fifth largest energy user in the world, for data centers right now.”

Radium Cloud’s newest data center in Raleigh, North Carolina. Photo courtesy of Vijay Gadepally.

Is there a sustainable future for data centers?

Energy is now a material issue to running an AI company, DeOrsey said, and unrestrained, quickly evolving AI models are very expensive to train and operate. DeOrsey pointed to Chinese AI company DeepSeek, which released its attempt at a cost-conscious, energy efficient large language model, R1, in January.

The company claims it trained the model on 2,000 chips, far fewer than competitors like OpenAI, ChatGPT’s parent company, and Google, which use about 16,000 chips. It’s not yet clear whether the model lives up to its claims of energy efficiency in use, but it’s a sign that companies are feeling the pressure to be more efficient, DeOrsey said.

“I think companies like DeepSeek are an example of companies doing constrained optimization,” he said. “They’re assuming they won’t just get all the power they need, they won’t be able to get all of the chips they need, and just make do with what they have.”

For Gadepally, who is also chief tech officer of AI company Radium Cloud, this selective optimization is a tool he hopes more companies begin using. His recent work at MIT’s Lincoln Laboratory Supercomputing Center focused on the lab’s own data center consumption. When they realized how hot their equipment was getting, they did an audit.

Gadepally said simple switches like using cheaper, less-robust AI models cut down on their energy use. Using AI models at off-peak times saved money, as did “power capping,” or limiting the amount of power feeding their computer processors. The difference to users was minimal: you may wait a second or two more to get an answer back from a chatbot, for example.

With Northeastern University, MIT built software called Clover that watches carbon intensity for peak periods and makes adjustments, like automatically using a lower-quality AI model with less computing power when energy demand is high.

“We’ve been kind of pushing back on people for a long time saying, is it really worth it?” Gadepally said. “You might get a better, you know, knock-knock joke from this chatbot. But that’s now using 10 times the power than it was doing before. Is that worth it?”

Gadepally and Acosta both spoke about localizing AI tools as another energy and cost saving strategy for companies and data centers. In practice, that means building tools to do exactly what you need them to do, and nothing more, and hosting them on local servers that don’t need to send their computing out potentially hundreds of miles away to the nearest data center.

Health care and agricultural settings are a great example, Acosta said, where tools can be built to serve these specialized settings rather than processing their data at “bloated, over-fluffed” large data centers.

Neither AI developer sees any slowdown in the demand for AI or for the processing capabilities of data centers. But Gadepally said environmental and energy concerns will come to a head for tech companies when they realize they could save money by saving energy, too. Whether DeepSeek finds the same success as some of its American competitors remains to be seen, Gadepally said, but its example will probably make them question their practices.

“It will at least make people question before someone says, ‘I need a billion dollars to buy new infrastructure,’ or ‘I need to spend a billion dollars on computing next month,’” Gadepally said. “Now they may say, ‘Did you try to optimize it?’”

Data privacy experts call DOGE actions ‘alarming’

31 March 2025 at 09:07
White House Senior Advisor to the President, Tesla and SpaceX CEO Elon Musk arrives for a meeting with Senate Republicans at the U.S. Capitol on March 05, 2025 in Washington, DC. Musk is scheduled to meet with Republican lawmakers to coordinate his ongoing federal government cost cutting plan. (Photo by Kevin Dietsch/Getty Images)


While the role and actions of the Elon Musk-headed Department of Government Efficiency remain somewhat murky, data privacy experts have been tracking the group’s moves and documenting potential violations of federal privacy protections.

Before President Donald Trump took office in January, he characterized DOGE as an advisory body, saying it would “provide advice and guidance from outside of government” in partnership with the White House and Office of Management and Budget in order to eliminate fraud and waste from government spending.

But on Inauguration Day, Trump’s executive order establishing the group said Musk would have “full and prompt access to all unclassified agency records, software systems and IT systems.”

In the nine weeks since its formation, DOGE has been able to access sensitive information from the Treasury Department payment system, information about the headcount and budget of an intelligence agency and Americans’ Social Security numbers, health information and other demographic data. Musk and department staffers are also using artificial intelligence in their analysis of department cuts.

Though the Trump administration has not provided transparency around what the collected data is being used for, several federal agencies have laid off tens of thousands of workers, under the direction of DOGE, in the past two months. Thousands have been cut from the Environmental Protection Agency, Department of Education, Internal Revenue Service and the Department of Treasury this month.

Frank Torres, senior AI and privacy adviser for The Leadership Conference’s Center for Civil Rights and Technology, which researches the intersection of civil rights and technology, said his organization partnered with the Center for Democracy and Technology, a group that researches and works with legislators on tech topics, to sort out what DOGE was doing. The organizations published a resource sheet documenting DOGE’s actions, the data privacy violations they are concerned about and the lawsuits that have been filed over DOGE’s actions at several federal agencies.

“It doesn’t have to be this way,” Torres said. “I mean, there are processes and procedures and protections in place that are put in place for a reason, and it doesn’t appear that DOGE is following any of that, which is alarming.”

The organizations outlined potential violations of federal privacy protections, like the Privacy Act of 1974, which prohibits the disclosure of information without written consent, and substantive due process under the Fifth Amendment, which protects privacy from government interference.

White House Principal Deputy Press Secretary Harrison Fields would not say if DOGE planned to provide more insight into its plans for the data it is accessing.

“Waste, fraud and abuse have been deeply entrenched in our broken system for far too long,” Fields told States Newsroom in an emailed statement. “It takes direct access to the system to identify and fix it. DOGE will continue to shine a light on the fraud they uncover as the American people deserve to know what their government has been spending their hard earned tax dollars on.”

The lack of transparency concerns U.S. Reps. Gerald E. Connolly (D-Virginia) and Jamie Raskin (D-Maryland), who filed a Freedom of Information Act request this month asking DOGE to provide clear answers about its operations.

The request asks for details on who is in charge at DOGE, the scope of its authority to close federal agencies and lay off federal employees, the extent of its access to sensitive government databases, and for Musk to outline how collected data may benefit his own companies and his foreign customers. They also questioned the feeding of sensitive information into AI systems, which DOGE touted last month.

“DOGE employees, including teenage and twenty-something computer programmers from Mr. Musk’s own companies, have been unleashed on the government’s most sensitive databases — from those containing national security and classified information to those containing the personal financial information of all Americans to those containing the trade secrets and sensitive commercial data of Mr. Musk’s competitors,” the representatives wrote in the request.

Most Americans have indeed submitted data to the federal government that can now be accessed by DOGE, whether via a tax filing, a student loan or Social Security, said Elizabeth Laird, the director of equity in civic technology for the Center for Democracy and Technology. Laird said the two organizations see huge security concerns with how DOGE is collecting data and what it may be doing with the information. In the first few weeks of the group’s existence, a coder discovered that anyone could access the database that posted updates to the DOGE.gov website.

“We’re talking about Social Security numbers, we’re talking about income, we’re talking about, you know, major life events, like whether you had a baby or got married,” Laird said. “We’re talking about if you’ve ever filed bankruptcy — like very sensitive stuff, and we’re talking about it for tens of millions of people.”

With that level of sensitive information, the business need should justify the level of risk, Laird said.

DOGE’s use of AI to comb through and categorize Americans’ data is concerning to Laird and Torres, as AI algorithms can produce inaccurate responses, pose security risks themselves and can have biases that lead to discrimination against marginalized groups.

While Torres, Laird and their teams plan to continue tracking DOGE’s actions and their potential privacy violations, they published the first resource sheet to start bringing awareness to the information that is already at risk. The data collection they’ve seen so far in an effort to cut federal spending is concerning, but both said they fear Americans’ data could end up being used in ways we don’t yet know about.

“The government has a wealth of data on all of us, and I would say data that’s probably very valuable on the open market,” Torres said. “It’s almost like a dossier on us from birth to death.”

Musk fired back at critics in an interview with Fox News published Thursday.

“They’ll say what we’re doing is somehow unconstitutional or illegal or whatever,” he said. “We’re like, ‘Well, which line of the cost savings do you disagree with?’ And they can’t point to any.”
