
Data privacy experts call DOGE actions ‘alarming’

White House Senior Advisor to the President, Tesla and SpaceX CEO Elon Musk arrives for a meeting with Senate Republicans at the U.S. Capitol on March 05, 2025 in Washington, DC. Musk is scheduled to meet with Republican lawmakers to coordinate his ongoing federal government cost cutting plan. (Photo by Kevin Dietsch/Getty Images)

While the role and actions of the Elon Musk-headed Department of Government Efficiency remain somewhat murky, data privacy experts have been tracking the group’s moves and documenting potential violations of federal privacy protections.

Before President Donald Trump took office in January, he characterized DOGE as an advisory body, saying it would “provide advice and guidance from outside of government” in partnership with the White House and Office of Management and Budget in order to eliminate fraud and waste from government spending.

But on Inauguration day, Trump’s executive order establishing the group said Musk would have “full and prompt access to all unclassified agency records, software systems and IT systems.”

In the nine weeks since its formation, DOGE has been able to access sensitive information from the Treasury Department payment system, information about the headcount and budget of an intelligence agency and Americans’ Social Security numbers, health information and other demographic data. Musk and department staffers are also using artificial intelligence in their analysis of department cuts.

Though the Trump administration has not explained what the collected data is being used for, several federal agencies have laid off tens of thousands of workers under the direction of DOGE in the past two months. Thousands have been cut this month from the Environmental Protection Agency, the Department of Education, the Internal Revenue Service and the Treasury Department.

Frank Torres, senior AI and privacy adviser for The Leadership Conference’s Center for Civil Rights and Technology, which researches the intersection of civil rights and technology, said his organization partnered with the Center for Democracy and Technology, which researches and works with legislators on tech topics, to sort out what DOGE was doing. The organizations published a resource sheet documenting DOGE’s actions, the data privacy violations they are concerned about and the lawsuits that several federal agencies have filed over DOGE’s actions. 

“It doesn’t have to be this way,” Torres said. “I mean, there are processes and procedures and protections in place that are put in place for a reason, and it doesn’t appear that DOGE is following any of that, which is alarming.”

The organizations outlined potential violations of federal privacy protections, like the Privacy Act of 1974, which prohibits the disclosure of information without written consent, and substantive due process under the Fifth Amendment, which protects privacy from government interference.

White House Principal Deputy Press Secretary Harrison Fields would not say if DOGE planned to provide more insight into its plans for the data it is accessing.

“Waste, fraud and abuse have been deeply entrenched in our broken system for far too long,” Fields told States Newsroom in an emailed statement. “It takes direct access to the system to identify and fix it. DOGE will continue to shine a light on the fraud they uncover as the American people deserve to know what their government has been spending their hard-earned tax dollars on.”

The lack of transparency concerns U.S. Reps. Gerald E. Connolly (D-Virginia) and Jamie Raskin (D-Maryland), who filed a Freedom of Information Act request this month demanding DOGE provide clear answers about its operations.

The request asks for details on who is in charge at DOGE, the scope of its authority to close federal agencies and lay off federal employees, and the extent of its access to sensitive government databases, and it asks Musk to outline how collected data may benefit his own companies and his foreign customers. The representatives also questioned the feeding of sensitive information into AI systems, which DOGE touted last month.

“DOGE employees, including teenage and twenty-something computer programmers from Mr. Musk’s own companies, have been unleashed on the government’s most sensitive databases — from those containing national security and classified information to those containing the personal financial information of all Americans to those containing the trade secrets and sensitive commercial data of Mr. Musk’s competitors,” the representatives wrote in the request.

Most Americans have indeed submitted data to the federal government that can now be accessed by DOGE, said Elizabeth Laird, the director of equity in civic technology for the Center for Democracy and Technology — whether via a tax filing, a student loan or Social Security. Laird said the two organizations see huge security concerns with how DOGE is collecting data and what it may be doing with the information. In the first few weeks of its existence, a coder discovered that anyone could access the database that posted updates to the DOGE.gov website.

“We’re talking about Social Security numbers, we’re talking about income, we’re talking about, you know, major life events, like whether you had a baby or got married,” Laird said. “We’re talking about if you’ve ever filed bankruptcy — like very sensitive stuff, and we’re talking about it for tens of millions of people.”

With that level of sensitive information, the business need should justify the level of risk, Laird said.

DOGE’s use of AI to comb through and categorize Americans’ data is concerning to Laird and Torres, as AI algorithms can produce inaccurate responses, pose security risks themselves and can have biases that lead to discrimination against marginalized groups.

While Torres, Laird and their teams plan to continue tracking DOGE’s actions and their potential privacy violations, they published the first resource sheet to start bringing awareness to the information that is already at risk. The data collection they’ve seen so far in an effort to cut federal spending is concerning, but both said they fear Americans’ data could end up being used in ways we don’t yet know about.

“The government has a wealth of data on all of us, and I would say data that’s probably very valuable on the open market,” Torres said. “It’s almost like a dossier on us from birth to death.”

Musk fired back at critics in an interview with Fox News published Thursday.

“They’ll say what we’re doing is somehow unconstitutional or illegal or whatever,” he said. “We’re like, ‘Well, which line of the cost savings do you disagree with?’ And they can’t point to any.”

Facial recognition in policing is getting state-by-state guardrails

Michigan resident Robert Williams was arrested for a crime he didn’t commit because a facial recognition system incorrectly suggested that he was the suspect seen in security camera footage. (Courtesy of the ACLU)

In January 2020, Farmington Hills, Michigan resident Robert Williams spent 30 hours in police custody after an algorithm listed him as a potential match for a suspect in a robbery committed a year and a half earlier.

The city’s police department had sent images from the security footage at the Detroit watch store to Michigan State Police to run through its facial recognition technology. An expired driver’s license photo of Williams in the state police database was a possible match, the technology said.

But Williams wasn’t anywhere near the store on the day of the robbery.

Williams’ case, a since-settled lawsuit filed in 2021 by the American Civil Liberties Union and Michigan Law School’s Civil Rights Litigation Initiative, was the first public case of wrongful arrest due to misuse of facial recognition technology (FRT) in policing.

But the case does not stand alone. Several more documented cases of false arrests due to FRT have come out of Detroit in the years following Williams’ arrest, and across the country, at least seven people have been falsely arrested after police found a potential match in the depths of FRT databases.

Williams’ lawsuit was the catalyst to changing the way the Detroit Police Department may use the technology, and other wrongful arrest suits and cases are being cited in proposed legislation surrounding the technology. Though it can be hard to legislate technology that gains popularity quickly, privacy advocates say unfettered use is a danger to everyone.

“When police rely on it, rely on them, people’s lives can be turned upside down,” said Nate Wessler, one of the deputy directors of the Speech, Privacy and Technology Project at the national ACLU.

How are police using FRT?

Facial recognition technology has become pervasive in Americans’ lives, and can be used for small, personal tasks like unlocking a phone, or in larger endeavors, like moving thousands of people through airport security checks.

The technology is built to assess a photo, often called a probe image, against a database of public photos. It uses biometric data like eye scans, facial geometry, or distance between features to assess potential matches. FRT software converts the data into a unique string of numbers, called a faceprint, and will present a set of ranked potential matches from its database of images.
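Vendors’ matching pipelines are proprietary, but the ranking step described above (scoring a probe faceprint against every faceprint in a database and returning the best candidates) can be sketched in a few lines. The cosine-similarity scoring, the toy vectors and the names below are illustrative assumptions, not any vendor’s actual method:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two faceprints (vectors of biometric measurements)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_matches(probe, database, top_k=5):
    """Return database entries ranked by similarity to the probe faceprint."""
    scores = [(name, cosine_similarity(probe, faceprint))
              for name, faceprint in database.items()]
    return sorted(scores, key=lambda item: item[1], reverse=True)[:top_k]

# Toy three-number faceprints; real systems use much higher-dimensional vectors.
database = {
    "person_a": [0.90, 0.10, 0.30],
    "person_b": [0.20, 0.80, 0.50],
    "person_c": [0.85, 0.15, 0.35],
}
probe = [0.88, 0.12, 0.32]
ranked = rank_matches(probe, database)
```

Note that every probe returns a ranked list, so the system surfaces a “best match” even when the true person is not in the database at all, which is why a match alone is not evidence.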

When police use these systems, they are often uploading images from a security camera or body-worn camera. Popular AI company Clearview, which often contracts with police and has developed a version specifically for investigations, says it hosts more than 50 billion facial images from public websites, including social media, mugshots and driver’s license photos.

Katie Kinsey, chief of staff and tech policy counsel for the Policing Project, an organization focused on police accountability, said that she’s almost certain that if you’re an adult in the U.S., your photo is included in Clearview’s database, and is scanned when police are looking for FRT matches.

“You’d have to have no presence on the internet to not be in that database,” she said.

The use of FRT by federal law enforcement agencies goes back as long as the technology has been around, more than two decades, Kinsey said, but local police departments began using it in the last 10 years.

Usually, police are using it in the aftermath of a crime, but civil liberties and privacy concerns are heightened by the idea that the technology could be used to scan faces in real time, with geolocation data attached, she said. Kinsey, who often meets with law enforcement officers to develop best practices and legislative suggestions, said she believes police forces are wary of real-time uses.

Boston Police attempted to use it while searching for the suspects in the 2013 Boston Marathon bombing, for example, but grainy imaging hindered the technology in identifying the culprits, Kinsey said.

Wrongful arrests

Wrongful arrests involving FRT usually stem from cases where police have no leads on a crime other than an image captured by security cameras, said Margaret Kovera, a professor of psychology at the John Jay College of Criminal Justice and an eyewitness identification expert.

Before the technology was available, police needed investigative leads to pin down suspects — physical evidence, like a fingerprint, or an eyewitness statement, perhaps. But with access to security cameras and facial recognition technology, police can quickly conjure up several possible suspects that the software scores as likely matches.

With millions of faces in a database, the pool of potential suspects feels endless. Because the technology finds matches that look so similar to the photo provided, someone choosing a suspect in a photo array can easily make a wrong identification, Kovera said. Without further investigation and traditional police work to connect the match chosen by the technology to a crime scene, the match is useless.

“You’re going to up the number of innocent people who are appearing as suspects and you’re going to decrease the number of guilty people,” Kovera said. “And just that act alone is going to mess up the ratio of positive identifications in terms of how many of them are correct and how many of them are mistaken.”

In the seven known cases of wrongful arrest following FRT matches, police failed to conduct the sufficient follow-up investigation that could have prevented the incidents. One man in Louisiana spent a week in jail despite being 40 pounds lighter than the thief allegedly seen in surveillance footage. A woman in Detroit who was eight months pregnant was held in custody for 11 hours after being wrongfully arrested for carjacking, despite no mention of the carjacker appearing pregnant.

When Williams was arrested in January 2020, he was the ninth-best match for the person in the security footage, Michael King, a research scientist with the Florida Institute of Technology’s (FIT) Harris Institute for Assured Information, testified in the ACLU’s lawsuit. And detectives didn’t pursue investigation of his whereabouts before making the arrest.

Detroit police used the expired license image in a photo array presented to a loss-prevention contractor who wasn’t present at the scene of the crime. The loss-prevention contractor picked Williams as the best match to the person in the security footage. Without further investigation of Williams’ whereabouts in October 2018, Detroit Police arrested him and kept him in custody for 30 hours.

The lawsuit says Williams was only informed after several lines of questioning that he was there because of a match via facial recognition technology. As part of the settlement, which Williams reached in the summer of 2024, Detroit Police had to change the way it uses facial recognition technology. The city now operates under some of the strictest limits on police use of the technology in the country; the technology is regulated on a state-by-state basis.

Police can no longer go straight from facial recognition technology results to a witness identification procedure, and they cannot apply for an arrest warrant based solely on the results of a facial recognition technology database, Wessler said. Because there can be errors or biases in the technology, and by its users, guardrails are important to protect against false arrests, he said.

Emerging laws

At the start of 2025, 15 states — Washington, Oregon, Montana, Utah, Colorado, Minnesota, Illinois, Alabama, Virginia, Maryland, New Jersey, Massachusetts, New Hampshire, Vermont and Maine — had some legislation around facial recognition in policing. Some states, like Montana and Utah, require a warrant for police to use facial recognition, while others, like New Jersey, say that defendants must be notified of its use in investigations.

At least seven more states are considering laws to clarify how and when the technology can be used — lawmakers in Georgia, Hawaii, Kentucky, Massachusetts, Minnesota, New Hampshire and West Virginia have introduced legislation.

Like all AI technologies, facial recognition can have baked-in bias or produce flawed responses. FRT has historically performed worse on Black faces than on white faces, and has shown gender differences, too. AI is trained to get better over time, but people seem to think that simply by involving humans in the process, we’ll catch all the problems, Wessler said.

But humans actually tend to have something called “automation bias,” Wessler said — “this hardwired tendency of people to believe a computer output’s right as many times as you tell somebody the algorithm might get it wrong.”

So when police are relying on facial recognition technology as their primary investigative tool, instead of following older law enforcement practices, it’s “particularly insidious” when it goes wrong, Wessler said.

“I often say that this is a technology that is both dangerous when it works and dangerous when it doesn’t work,” Wessler said.

Kinsey said in her work with the Policing Project, she’s found bipartisan support for placing guardrails on police using this technology. Over multiple meetings with privacy advocates, police forces, lawmakers and academics, the Policing Project developed a legislative checklist.

It outlines how police departments could use the technology with transparency, testing and standards strategies, officer training, procedural limits and disclosure to those accused of crimes. It also says legislation should require vendors to disclose documentation about their FRT systems, and that legislation should provide ways to address violations of their use.

The Policing Project makes similar recommendations for congressional consideration, and while Kinsey said she believes federal guidelines are important, federal legislation is unlikely to pass any time soon. In the meantime, states will likely continue to influence each other; recent laws in Maryland and Virginia are an example of a broad approach to regulating FRT across different areas.

Kinsey said that in her meetings with police, they assert that the technologies are essential to crime solving. She said she believes there is space for FRT, and other technologies used by police like license plate readers and security cameras, but that doing so unfettered can do a lot of harm.

“We think some of them can absolutely provide benefits for solving crime, protecting victims,” Kinsey said. “But using those tools, using them according to rules that are public, transparent and have accountability, are not mutually exclusive goals. They can actually happen in concert.”


TikTok ban poised to disrupt information ecosystem, livelihood of millions of users


Callie Goodwin, a creator from Columbia, South Carolina with nearly 250,000 followers, flew to Washington D.C. last week to advocate against the TikTok ban. (Photo courtesy Callie Goodwin)

As the fate of widely popular short-form video app TikTok hangs in the balance this week, creators, users and social media experts lament the cultural and economic losses U.S. users could experience if the app is banned this weekend.

“So many people are actually using it to make a living,” said Oliver Haimson, an assistant professor of information at the University of Michigan. “People are using it for entertainment, obviously, but also for community and social support and finding information about any number of things.”

The United States Supreme Court has been reviewing the arguments over a law President Joe Biden signed last April, which says the app poses security concerns because TikTok’s Chinese owner, ByteDance, is subject to Chinese national security laws that can compel companies to hand over users’ data at any time.

At the time of its signing, the law garnered wide bipartisan support. It said ByteDance must sell TikTok to a non-Chinese owner by Jan. 19, or it will be banned from app stores in the United States.

On Friday, U.S. Supreme Court justices questioned why they should intervene with the law, saying if divested, TikTok is free to pursue the best algorithm for its platform in the United States. TikTok’s lawyers argued that the law’s aim is to censor free speech, and that shutting it down in the U.S. would impact “one of the largest speech platforms in America,” for its roughly 170 million American users.

The creator economy 

But for many creators on TikTok, the issue is not just one of free speech, but also one that affects their livelihood.

Callie Goodwin, a creator from Columbia, South Carolina, flew to Washington D.C. last week to advocate with other TikTokers for overturning the law. Goodwin works a full-time job in marketing and social media, but has also amassed nearly 250,000 followers across two TikTok accounts. She launched her pre-stamped greeting card business, Sparks of Joy Co, in the thick of the pandemic in 2020. One TikTok post urging people to shop from her small business instead of big box stores blew up overnight.

“It completely changed the game for me and my business,” Goodwin said. “People really rallied around my business and really loved the fact that we were bringing back the art of handwritten cards, and especially in a season of loneliness.”

Goodwin had been posting about her small business on Meta’s platforms Facebook and Instagram, but it garnered little attention for her. When she started sharing about her products on TikTok and using its e-commerce feature TikTok Shop, her sales boomed. In one nine-day span last year, Sparks of Joy Co netted $30,000 in sales, and Goodwin hired 16 part-time employees. Currently, about 98% of her total sales come from TikTok Shop or from customers discovering her on the app.

For Goodwin, TikTok’s algorithm is what makes the platform special. The discoverability feature drives users toward content it thinks they will like, often pushing small businesses like hers to people who wouldn’t have discovered it otherwise.

TikTok also has a different structure for monetizing videos than other platforms, and its Creator Fund begins paying U.S. enrollees for views on their videos after they’ve reached 10,000 followers. Goodwin makes money not only from selling her products, but also from views on the videos for her small business account and for one she made to document her health journey, The Lose it Log.

Goodwin is one of an estimated 27 million creators in the U.S. who make some of their income from social media, according to a 2023 study. Of those, 44%, or around 11.6 million people, said they do social media as their full-time job.

Goodwin says she does post on other platforms, but she doesn’t “get paid a dime” on Meta accounts, even for videos that get millions of views. And brand partnerships, where companies pay creators to review or feature a product, are heavily influenced by followership and engagement numbers, which thrive on TikTok.

“And so you take away TikTok, and we’ve lost all ability to make money when it comes to the content we’re making, the views,” she said. “And some of my friends that do make their full income on TikTok have left their full-time jobs. They are bracing and prepping.”

On Tuesday, United States Senators Edward J. Markey (D-Mass.), Ron Wyden (D-Ore.), Cory Booker (D-N.J.) and House Rep. Ro Khanna (CA-17) introduced the Extend the TikTok Deadline Act, which aims to delay the Jan. 19 deadline. Markey said on the Senate floor Monday that the ban was “rushed through without sufficient consideration of the profound consequences it would have on the 170 million Americans who use the platform.”

“Today, TikTok is a space where users share critical resources during emergencies such as the Los Angeles wildfires, earn money to cover groceries and medical care, and build community in challenging times,” he said.

Online communities + vibe

That access to information and connection to those across the country and world is an invaluable part of TikTok, Haimson said. His research focuses on marginalized communities, especially those who identify as a part of the LGBTQ+ community.

The TikTok algorithm has allowed users to find queer TikTok creators who share about their life experiences and identities, and it often helps others learn more about themselves without ever meeting in person, Haimson said. That is how he defines community, he said.

Haimson has previously researched Chinese messaging, social media and payment app WeChat, which President Donald Trump proposed to ban in his first term. The ban was then dropped by President Biden in 2021.

“It’s an entire infrastructure that people use for payment, for work, for community, for support, all of these things. And I think we can kind of think about TikTok as being similar,” Haimson said.

Casey Fiesler, an associate professor in the University of Colorado Boulder’s College of Media, Communications and Information, makes educational TikTok content related to technology ethics and policy under the username ProfessorCasey. She aims to make these topics more accessible, and reach people beyond academia on topics like artificial intelligence. Fiesler said TikTok’s algorithm has led her to “a few really nice communities” of other creators, academics, scientists and educators.

“TikTok’s recommender system also has a real knack for pushing my content to people who are interested in it,” Fiesler said. “Which is great both because it gets to the right people and because it gets to less of the wrong people.”

In addition to doing a better job connecting her with creators and viewers she wants to know, her comment section is “a lot more pleasant” on TikTok than other places, she said. She does currently post on Instagram, and thinks she’ll move her content there if TikTok is no longer an option. But she’s wary of this week’s announced changes to Meta’s content rules, which include getting rid of its third-party fact checking program and allowing more kinds of free speech on topics “frequently subject to political debate.”

“It is hard to say how viable an option that platform will be moving forward,” she said.

TikTok’s arrival to the social landscape had “major impacts” on how platforms serve up users’ content, said Paige Knapp, Los Angeles-based founder and CEO of social media agency Kylee Social LLC. Even the smartest versions of previous social platforms were based around users actively following people or things that they liked, and showing them more of that content, Knapp said.

With TikTok, “for the very first time, a user could download the app, sign up and never follow an account, ever, and still have a highly curated ‘For You Page,’” Knapp said. “The average user was getting content from spaces that they may have not ventured previously, but the algorithm was saying, hey, based on your signals — the data signals you send us — we think you want to, you know, see these other things too.”
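TikTok’s actual recommender is proprietary, but the distinction Knapp describes (ranking content by inferred interest signals rather than by a follow graph) can be sketched in toy form. The topic tags and interest weights below are invented for illustration:

```python
def for_you_feed(posts, interest_weights, top_k=3):
    """Rank candidate posts by inferred topic interest, ignoring who the user follows."""
    scored = [(post["id"], interest_weights.get(post["topic"], 0.0)) for post in posts]
    # Highest inferred interest first; ties keep their original order (stable sort).
    return [post_id for post_id, _ in
            sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]]

# Candidate posts from accounts the user does NOT follow.
posts = [
    {"id": 1, "topic": "cooking"},
    {"id": 2, "topic": "news"},
    {"id": 3, "topic": "crafts"},
    {"id": 4, "topic": "cooking"},
]
# Hypothetical weights inferred from implicit signals such as watch time.
signals = {"cooking": 0.9, "crafts": 0.4}
feed = for_you_feed(posts, signals)
```

A follow-based feed would show nothing here, since the user follows none of these accounts; the ranking depends solely on the inferred weights.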

Knapp said TikTok has provided young people a way to tap into cultural moments and world events unlike ever before. While many millennials may have watched news coverage or read about the Arab Spring in the early 2010s, today’s young generations can watch world events, like the war in Gaza, playing out in real time on video.

“You sort of have a lens into other people’s worlds, whether that’s a creator, or an average person sharing their perspective on something, or a view into what’s happening in Palestine,” Knapp said. “And I think that’s a part of why there’s a big uproar about losing it for a number of reasons.”

What happens next?

There’s a chance the Supreme Court could strike down the law this week, or rule that ByteDance should have more time to consider divesting TikTok.

If the ban does go into effect on Jan. 19, TikTok is planning to shut off its app for U.S. users that day, The Information reported Wednesday. A source told the outlet that users in the U.S. will encounter a pop-up message about the ban if they open the app.

Some users might try to look for work-arounds, like using a virtual private network, or VPN, set to a location outside the U.S. to continue using the app, Haimson said. But for those worried about security, these options may make it worse. If you can’t update your app or have accurate location settings, the security patches included in software updates won’t be applied to your account, and you may be vulnerable.

“But I don’t think people are going to give it up easily,” Haimson said.

Other social media sites may experience widespread “platform migration,” Fiesler said. People have been moving from one online platform to another for decades, she said — think about the rise and fall in popularity of MySpace or Tumblr in the early and mid-2000s.

Usually that migration is done by choice, Fiesler said, like users leaving X, formerly known as Twitter. The day after the 2024 election, X experienced the most deactivations since Elon Musk took ownership in 2022, and rival platform Bluesky reported 1 million new users the same week. But if TikTok is banned, there will probably be a period of users testing other apps for substitutions.

“What often happens instead is fragmentation, where people land on a lot of different platforms,” Fiesler said. “I think that is likely what would happen if TikTok users have to find a new home — they will find a lot of new homes.”

In the last few days, Chinese social media app Xiaohongshu, or RedNote, became the No. 1 downloaded free app in Apple’s U.S. app store, and Lemon8, which is also owned by ByteDance, saw an increase in downloads. Meta’s Reels and YouTube’s Shorts offer users a similar way to post short-form content, though TikTok users say their comment structure, discoverability and algorithms are quite different.

If TikTok shuts down in the U.S. on Sunday, it may take a while for people to fill the voids of entertainment, information, economic opportunity and connection left in TikTok’s wake, Knapp said. She said she feels no other app matches the welcoming feeling, camaraderie and tone of TikTok.

“It kind of reminds me of the ambiance at a restaurant,” Knapp said. “You can recreate the menu, as it were, for TikTok — the idea of a discoverable algorithm that, you know, serves recommended content. But if that ambiance is not right, then users are not going to spend a ton of time there.”

The potential loss of TikTok is one Knapp said she feels personally, and she knows her creator clients and the companies that spend money on sponsored content and brand partnerships will feel, too. The impact will certainly also be felt by regular, everyday people who found new hobbies, learned new skills and had their minds opened to ideas and concepts they wouldn’t have otherwise considered, Knapp said.

“[TikTok] gave rise to these really unique perspectives,” Knapp said. “And just a diverse roster of folks who ended up capturing voice and kind of being able to not just … entertain and things, but also really make an impact with the following that they cultivated.”
