College students ‘cautiously curious’ about AI, despite mixed messages from schools, employers

16 December 2024 at 11:30

University of Utah student Rebeca Damico said her professors at first took a hard line against AI when ChatGPT was introduced in 2022, but she and other students say schools have softened their stands as the usefulness – and career potential – of the technology has become clearer. (Photo by Spenser Heaps for States Newsroom)

For 21-year-old Rebeca Damico, ChatGPT’s public release in 2022 during her sophomore year of college at the University of Utah felt like navigating a minefield.

The public relations student, now readying to graduate in the spring, said her professors immediately added policies to their syllabuses banning use of the chatbot, calling the generative artificial intelligence tool a form of plagiarism.

“For me, as someone who follows the rules, I was very scared,” Damico said. “I was like, oh, I can’t, you know, even think about using it, because they’ll know.”

Salt Lake City-based Damico studied journalism before switching her major to public relations, and saw ChatGPT and tools like it as a real threat to the writing industry. She also felt very aware of the “temptation” she and her classmates now had — suddenly a term paper that might take you all night to write could be done in a few minutes with the help of AI.

“I know people that started using it and would use it to … write their entire essays. I know people that got caught. I know people that didn’t,” Damico said. “Especially in these last couple weeks of the semester, it’s so easy to be like, ‘Oh, put it into ChatGPT,’ but then we’re like, if we do it once, it’s kind of like, this slippery slope.”

But students say they’re getting mixed messages – the stern warning from professors against use of AI and the growing pressure from the job market to learn how to master it.

The technological developments of generative AI over the last few years have cracked open a new industry, and a wealth of job opportunities. In California, Gov. Gavin Newsom recently announced the first statewide partnership with a tech firm to bring AI curriculum, resources and opportunities to the state’s public colleges.

And even for those students not going into an IT role, it’s likely they will be asked to use AI in some way in their industries. Recent research from the World Economic Forum’s 2024 Work Trend Index Annual Report found that 75% of people in the workforce are using AI at work, and that some hiring managers now weigh AI skills as heavily as real-world job experience.

Higher ed’s view of AI

Over the last few years, the University of Utah, like most academic institutions, has had to take a position on AI. As Damico experienced, the university added AI guidelines to its student handbook that take a fairly hard stance against the tools.

It urges professors to use AI detection tools in addition to the Turnitin plagiarism-scanning feature built into the Canvas education platform. The guidelines also now define the use of AI tools without citation, documentation or authorization as a form of cheating.

Though Damico said some professors continue to hold a hard line against AI, others have started to embrace it. The case-by-case approach Damico describes from her professors is in line with how many academic institutions are handling the technology.

Some universities spell out college-wide rules, while others leave it up to professors themselves to set AI standards in their classrooms. Others, like Stanford University, acknowledge in their policies that students are likely to interact with the technology.

Stanford bans AI from being used to “substantially complete an assignment or exam,” and says students must disclose its use, but says “absent a clear statement from a course instructor, use of or consultation with generative AI shall be treated analogously to assistance from another person.”

Virginia Byrne is an associate professor of higher education and student affairs at Morgan State University in Baltimore, and she studies technology in the lives of learners and educators, with a focus on how it impacts college students. She said the university allows professors to figure out what works best for them when it comes to AI. She herself often assigns projects that prompt students to investigate the strengths and weaknesses of popular AI tools.

She’s also a researcher with the TRAILS Institute, a multi-institution organization aiming to understand what trust in AI looks like, and how to create ethical, sustainable AI solutions. Along with Morgan State, researchers from the University of Maryland, George Washington University and Cornell University conduct a variety of research, on topics such as how ChatGPT can be used in health decision-making, how to create watermarking technology for AI and how other countries are shaping AI policy.

“It’s cool to be in a space with people doing research that’s related, but so different,” Byrne said. “Because it expands your thinking, and it allows us to bring graduate students and undergraduate students into this community where everyone is focused on trustworthiness and AI, but from so many different lenses.”

Byrne hopes that her students can see the potential that AI has to make their lives and work easier, but she worries that it creates an “artificial expectation” for how young people need to perform online.

“It might lead some folks, younger folks, who are just starting their careers, to feel like they need to use (design tool) Canva to look totally perfect on LinkedIn, and use all these tools to … optimize their time and their calendars,” Byrne said. “And I just worry that it’s creating a false expectation of speed and efficiency that the tools currently can’t accomplish.”

Theresa Fesinstine is the founder of peoplepower.ai, which trains HR professionals on ways AI can be used efficiently within their organizations. This semester, she taught her first college course, on AI and business, at the City University of New York, to students of all years and backgrounds.

Fesinstine said she was surprised how many of her students knew little to nothing about AI, but heard that many other instructors warned they’d fail students found to have used it in assignments. She thinks this mixed messaging often comes from not understanding the technology and its ability to help with tasks like outlining or finding research sources.

“It’s a little scary, and I think that’s where, right now, most of the trepidation is centered around,” she said. “It’s that most people, in my opinion, haven’t been trained or understand how to use AI most effectively, meaning they use it in the same way that you would use Google.”

Real-world applications

Shriya Boppana, a 25-year-old MBA student at Duke University, not only uses AI in her day-to-day life for schoolwork, but she’s also pursuing a career in generative AI development and acquisitions. She wasn’t initially interested in AI, she said, but she worked on a project with Google and realized how the technology was set to influence everyday life, and how malleable it still is.

“Once you kind of realize how much that the tech actually isn’t as fleshed out as you think it is, I was a little more interested in … trying to understand what the path is to get it where it needs to go,” Boppana said.

She said she uses some form of AI tool every day, from planning her own schedule, to having a chatbot help decide how students in a group project should divide and complete work, based on their availability. Because she works with it regularly, she understands the strengths and limitations of AI, saying it helps her get mundane tasks done, process data or outline an assignment.

But she said the personalized tone she aims to have in her writing just isn’t there yet with the publicly available AI tools, so she doesn’t completely rely on it for papers or correspondence.

Parris Haynes, a 22-year-old junior studying philosophy at Morgan State, said the structure and high demand of some students’ coursework almost “encourages or incentivizes” them to use AI to help get it all done.

He sees himself going into either law or academia, and said he’s a little nervous about how AI is changing those industries. Though he leans on AI to help organize thoughts or assignments for classes like chemistry, Haynes said he wouldn’t go near it when it comes to his work or career-related objectives for his philosophy classes.

“I don’t really see much of a space for AI to relieve me of the burden of any academic assignments or potential career tasks in regards to philosophy,” Haynes said. “Even if it could write a convincing human-seeming paper, a philosophical paper, it’s robbing me of the joy of doing it.”

Gen Z’s outlook on their future with AI 

Like Haynes, Fesinstine knows that some of her students are interested, but a little scared about the power AI may have over their futures. Although there’s a lot of research about how older generations’ jobs are impacted by AI, those just about to break into the workforce may be the most affected, because they’ve grown up with these technologies.

“I would say the attitude is — I use this term a lot, ‘cautiously curious,’” Fesinstine said.  “You know, there’s definitely a vibe around ethics and protection that I don’t know that I would see in other generations, perhaps … But there’s also an acknowledgement that this is something that a lot of companies are going to need and are going to want to use.”

Now, two years since ChatGPT’s release, Damico has started to realize the ways generative AI is useful in the workplace. She began working with PR firm Kronus Communications earlier this year, and was encouraged to explore some time-saving or brainstorming functions of generative AI.

She’s become a fan of having ChatGPT explain new business concepts to her, or getting it to suggest Instagram captions. She also likes to use it for more refined answers than Google might provide, such as when she’s searching for publications to pitch a client’s story to.

Though she’s still cautious, and won’t use generative AI to write actual assignments for her, Damico said she realizes she needs the knowledge and experience after graduation — “it gives you kind of this edge.”

Boppana, who sees her career growing in the AI space, feels incredibly optimistic about the role AI will play in her future. She knows she’s more knowledgeable and prepared to go into an AI-centered workforce than most, but she feels like the opportunities for growth in healthcare, telecommunications, computing and more are worth wading into uncertain waters.

“I think it’s like a beautiful opportunity for people to learn how machines just interact with the human world, and how we can, I don’t know, make, like, prosthetic limbs, like test artificial hearts … find hearing aids,” Boppana said. “There’s so much beauty in the way that AI helps human beings. I think you just have to find your space within it.”

Biden administration leaves ‘foundational’ tech legacy, technologists say

29 November 2024 at 11:30

Tech insiders say Biden is leaving a strong foundation for high-tech industry, boosting broadband access, setting a foundation for AI regulation, and encouraging chip manufacturing. (Rebecca Noble | Getty Images)

As he’s poised to leave office in two months, President Joe Biden will leave a legacy of “proactive,” “nuanced” and “effective” tech policy strategy behind him, technologists across different sectors told States Newsroom.

Biden’s term was bookended by major issues in the tech world. When he took office in early 2021, he was faced with an economy and workforce struggling to deal with the COVID-19 pandemic, and with a longstanding digital divide across the country. As he prepares to exit the White House, federal agencies are working to incorporate the principles of the 2023 AI Bill of Rights into their handling of evolving technologies that will undoubtedly continue changing American life.

Though he was unable to get federal regulations on AI passed through Congress, Biden’s goal was to bring tech access to all Americans, while safeguarding against potential harms, the technologists said.

“I think everything that he does is foundational,” said Suriel Arellano, a longtime consultant and author on digital transformation who’s based in Los Angeles. “So it definitely sets the stage for long term innovation and regulation.”

The digital divide 

For Arellano, Biden’s attempt to bring internet access to all families stands out as a lasting piece of the president’s legacy. Broadband internet for work, healthcare and education was a part of Biden’s 2021 Bipartisan Infrastructure Deal, especially targeting people in rural areas.

Biden earmarked $65 billion toward the project, which was doled out to states and federal departments to establish or improve the physical infrastructure to support internet access. As of September, more than 2.4 million previously unserved homes and businesses have been connected to the internet, and $50 billion has been given to grant programs that support these goals across the states.

Arellano said he thinks there’s still work to do with the physical broadband infrastructure before that promise is realized — “I think that should have come first,” he said.

“But I think as a legacy, I think breaching the digital divide is actually one of the strong — maybe not the strongest, but I would say it’s definitely a strong legacy that he leaves,” Arellano said.

Shaping the U.S. conversation about AI

During Biden’s presidency, practical and responsible application of artificial intelligence became a major part of the tech conversation. The 2023 AI Bill of Rights created the White House AI Council, a framework for federal agencies to follow on privacy protection, and a list of guidelines for securing AI workers, navigating the effects on the labor market and ensuring equity in AI use, among others.

The guidelines put forth by the administration are subtle, and “not likely to be felt by the average consumer,” said Austin-based Alex Shahrestani, an attorney and managing partner at Promise Legal, which specializes in tech and regulatory policy.

“It was something that’s very light touch and essentially sets up the groundwork to introduce a regulatory framework for AI providers without it being something that they’re really going to push back on,” Shahrestani said.

In recent months, some federal agencies, including the Department of Labor and the Office of Management and Budget, have released the guidelines called for by the AI Bill of Rights; the OMB guidance outlines how the government will go about “responsible acquisition” of AI. It may not seem like these guidelines would affect the average consumer, Shahrestani said, but government contractors are likely to be larger companies that already have a significant commercial footprint.

“It sets up these companies to then follow these procedures in other contexts, so whether that’s B2B or direct-to-consumer applications, that’s like more of a trickle down sort of approach,” he said.

Sheena Franklin, D.C.-based founder of K’ept Health and previously a lobbyist, said Biden emphasized the ethical use and development of AI, and set a tone of fostering public trust and preventing harm with the AI Bill of Rights.

Franklin and Shahrestani agreed it’s possible that President-elect Donald Trump could repeal some of Biden’s executive orders on AI, but they see the Bill of Rights as a fairly light approach to regulating it.

“It was a really nuanced and effective approach,” Shahrestani said. “There’s some inertia building, right? Like a snowball rolling down the hill. We’re early days for the snowball, but it just got started and it will only grow to be a bigger one.”

The CHIPS act

Biden’s CHIPS and Science Act of 2022, which aimed to strengthen domestic semiconductor manufacturing, supply chains and the innovation economy with a $53 billion investment, is a major piece of his legacy, Franklin said. The bill centered on worker and community investments, and prioritized small businesses and underrepresented communities, with a goal of economic growth in the U.S., and especially in communities that needed support.

Two years after the bill was signed, the federal government, in partnership with American companies, has provided funding for semiconductor manufacturing projects that created more than 100,000 jobs and workforce development programs. The U.S. is on track to produce 30% of the world’s semiconductor chips in 2032, up from 10% today.

“He was really trying to position the U.S. as a global leader when it came to technology, because that industry is going to continue to grow,” Franklin said.

It’s hard to quantify what the lasting impact of the CHIPS act will be, but one immediate factor is computing, Shahrestani said. The AI models being developed right now have infinite abilities, he said, but the computing power had previously held the industry back.

“Being able to provide more compute through better chips, and more sophisticated hardware is going to be a big part of what provides, and what is behind the best AI technologies,” Shahrestani said.

Accountability for Big Tech

Many in the Big Tech community see Biden’s AI Bill of Rights and its data privacy provisions, as well as the Justice Department’s monopoly lawsuits against tech giants like Apple and Google, as hampering innovation.

Arellano is optimistic about the technological advances and innovation that the U.S. may see under a less regulation-focused Trump presidency, but he cautions that some regulations may be needed for privacy protections.

“My concern is always on the public side, you know, putting the dog on a leash, and making sure that our regulations are there in place to protect the people,” he said.

Franklin predicts that if Biden attempts any last-minute tech policy before he leaves office, it will probably be to pursue further antitrust cases. It would align with his goal of fostering competition between startups and small businesses and reinforce his legacy of safeguarding consumer interests, she said.

When she considered how to describe Biden’s tech legacy, Franklin said she nearly used the word “strength,” though she said he ultimately could have done a little bit more for tech regulation. But she landed on two words: “thoughtful and proactive.”

“Meaning, he’s thinking about everybody’s concerns,” Franklin said. “Not just thinking about the Big Tech and not just thinking about the consumers, right? Like there has to be a balance there.”

Some in the venture capital community backed Trump. Here’s what’s next

25 November 2024 at 11:00

Tesla CEO Elon Musk, right, was hardly alone in the tech sector in supporting the reelection effort of Donald Trump, left. Many Silicon Valley investors and innovators were hoping for a lighter regulatory hand than they have seen under President Joe Biden. (Photo by Brandon Bell/Getty Images)

Some venture capital investors, who have funded the tech boom in Silicon Valley and beyond, say they are excited by the prospect of a lighter regulatory environment under a new Trump Administration than they saw under President Joe Biden.

But they warn that Trump policies that will benefit many technology companies may come at a cost to other pro-Trump voters.

The Bay Area bubble of Silicon Valley, which is home to institutional tech giants like Apple, Google, Intel and Adobe, had previously been seen as a left-leaning region, like many other California communities. But the 2024 election was a unique one, venture capitalists and founders say.

“There’s been a significant shift in the valley rightward since the last election,” said Joe Endoso, a Silicon Valley investor.  “And you’ve seen that in the financial flows — in the level of dollars — that were directed towards supporting President Trump’s campaign from the technology sector.”

Endoso, president of financial tech platform Linqto, said some tech industry people who previously voted for progressive issues and candidates this time cast their ballot for Trump. He said he’s heard more concern about potential regulations in the tech industry and negative economic effects under continued Democratic leadership.

This turn toward Trump wasn’t universal in the Valley. The majority of donations from employees at companies like Google, Amazon and Microsoft went toward Democratic candidate Kamala Harris, Reuters reported in September. But tech billionaires like Elon Musk and venture capital investors, like Andreessen Horowitz co-founders Marc Andreessen and Ben Horowitz, poured millions into Trump’s campaign.

While Trump didn’t receive unanimous support from the tech sector, many American tech giants and investors are excited about the light-handed approach to tech regulation that’s likely to come in the next four years. Congress has struggled to pass any federal laws around emerging technology like AI, though states have done so on their own on issues like data privacy, transparency, discrimination, and on how AI-generated images can be used.

The Biden administration, however, on its own issued a number of “best practice” guides for emerging technologies and aggressively pursued antitrust cases against some tech giants, including an ongoing case against Google that could force the company to spin off its popular Chrome web browser.

It appears unlikely that Trump will continue the Biden era regulatory and enforcement drives.

Those working in emerging technologies like AI are making advancements so quickly that regulators are unlikely to be able to keep up anyway, Endoso said. The tech industry mindset — move fast and break things, first coined by Facebook founder Mark Zuckerberg — will likely continue under Trump’s administration.

“You’re running through walls and hoping that when the regulations come about, they’re not going to be so, you know, restrictive,” Endoso said. “But you’re not going to sit and wait for the regulators. You can’t afford to.”

Why care about the VC market?

Venture capitalists pour money into many promising startups in Silicon Valley and elsewhere, looking for the ones that will create lucrative new technologies or “disrupt” existing ones. Silicon Valley successes include Uber, which received its first round of venture capital investment, of about $1.3 million, in 2010, and Airbnb, which started with just a $20,000 investment in 2008. Today, the companies are worth $146 billion and $84 billion, respectively.

Many more, however, fail. High-visibility startups that folded after raising very large sums include streaming platform Quibi, which raised $1.75 billion, and ChaCha, an SMS text-based search platform that had raised $108 million.

The high-risk, high-reward nature of the industry makes for a rarefied business, and there’s a high barrier to entry. To become an accredited investor, one must have an income of at least $200,000 a year or a net worth of at least $1 million. The handful of firms pouring the most money into the United States technology market are usually worth billions.

Yet, the technology being developed and funded by wealthy investors today will shape the next decade of everyone’s lives. Some of the most influential technology in the global economy has been released under President Joe Biden’s administration in the last three and a half years.

Advancements in generative AI and machine learning, rapid development of augmented and virtual reality, wider adoption of cloud computing and Internet of Things (IoT) technologies such as internet-connected appliances and home devices, and the automation of many industries have already shifted much of American life. ChatGPT, one of the most recognizable examples of generative AI available to the public, was released only two years ago, but generative AI is already threatening many American jobs.

Those with writing-focused careers, like copywriters and social media marketers, are already feeling the disruption, and experts believe STEM professionals, educators, workforce trainers and others in creative and arts fields will see many of their job responsibilities automated by AI by 2030.

The venture capital market has been a volatile one over the last four years. Though many of Trump’s attacks on Democrats during his campaign cycle centered on the healthy economy under his first term, the COVID-19 pandemic was the single-biggest economic factor to disrupt the venture capital market and others.

The U.S. saw its biggest year for venture capital investments in 2021, but supply-chain issues and the continuing reliance on remote work changed the trajectory of many companies’ plans to go public on the stock market. High inflation and interest rates have kept many investors from deploying capital and many companies from completing mergers and acquisitions since then, although the second half of 2024 is looking up.

The economy quickly became the number one issue for Americans in the presidential election cycle. And though thriving venture capital markets usually benefit those that are already wealthy enough to invest, we’ll likely see a positive correlation in the general markets too, said Scott Nissenbaum, president and CEO of Ben Franklin Technology Partners, an innovation-centered fund in Pennsylvania.

“A thriving, efficient market is good for venture capital. And the flip side is also true,” he said. “We feed into and create the innovations and the efficiencies and the next generation … that create the robust and the boom.”

How investors and founders are preparing for Trump 

Nissenbaum predicts that Trump may remove regulations for technology used by U.S. transportation and military systems, allowing for more tech integration than previously permitted without human safeguards in place. That might look like more flight optimization technology, or more drone usage by military branches. Nissenbaum also thinks Trump will attempt to open up space travel, especially with big backing by Musk, who runs SpaceX.

Health care has also been adopting technology rapidly, and Nissenbaum believes it could see some major changes under Trump.

That is of note for healthtech founder Sipra Laddha, an Atlanta-based psychiatrist and cofounder of LunaJoy, which provides in-person and virtual wellness visits for women. The three-year-old company raised venture capital in 2022 and 2023, despite a more challenging fundraising market. Women’s health care companies saw a surge of VC investment in the wake of the overturning of Roe v. Wade in June 2022, an exception to the generally slower investment market at the time.

But she is uncertain about how Trump’s potential cabinet appointees, like Robert F. Kennedy Jr., who was nominated to head the Department of Health and Human Services, will affect LunaJoy’s operation. Kennedy has made health a key issue in his public advocacy and political activity, but he has also espoused eccentric and even false views on issues such as vaccines and pharmaceuticals.

“When women don’t have choices, mental health is significantly worse, and that’s something that goes on, often, for the entire time of that family’s trajectory,”  Laddha said. “So I’m not quite sure what’s going to happen, but you know, those are certainly things that, as a women’s mental health company, we are looking at and watching closely to see what sort of legislation, rules and laws come out.”

When it comes to fundraising early next year, Laddha is optimistic. She’s focused on how fragmented the healthcare industry is right now, and plans to showcase how companies like hers will aim to integrate with larger health systems.

“Our role is to be really as disruptive as possible, and to bring to the forefront the most innovative solutions that we can do while still working within the current framework of healthcare that exists today,” she said.

Some sectors worry about Trump economic policy

While software and cloud-based technologists seem excited by the effects of deregulation, startup founders that make physical products, especially using microchip technology, are wary of Trump’s plan to impose tariffs on imported goods.

Samyr Laine, a managing partner at Los Angeles-based Freedom Trail Capital, specializes in consumer tech and consumer packaged goods. Laine said he feels a sense of relief at the end of the “uncertainty” around who will hold the presidency for the next four years, but he predicts many founders will feel the costly effects of Trump’s planned tariffs and pass those additional costs on to consumers.

Though the existing companies in his portfolio won’t be hit too hard, tariffs are a factor the firm will be forced to review when considering future investments. Companies that incur the additional costs of imported goods will have to adjust their profit margins and might not be as attractive to investors.

“As a consumer and someone who isn’t in the space, not to be like a fear monger, but expect that some of the things you typically pay for, the price will go up,” Laine said.

The effect on work

Although Trump picked up a significant amount of support from the tech industry elite this election season, much of his voter base is working-class people who will not feel the positive effects of tech industry deregulation.

Endoso, the Silicon Valley investor and founder, says the Trump coalition of tech entrepreneurs and working-class voters represents “a division between the haves and the have-nots.” The usual bases on which people pick their electoral preferences, like race, geography, income and proximity to city life, were “shattered” this time around.

“It was a revolt of the working class, at least in my view,” he said.

The advancements of AI and machine learning, which will enrich the investor class, will have large implications for employment among those working-class voters. The vast majority of Americans who are not college educated and work physical jobs might struggle to thrive, he said. We’ll likely see overhauls of industries as robots replace and automate a majority of the physical labor in warehouses, and self-driving vehicles take over jobs like long-haul trucking and ride services such as Uber and Lyft.

“I think those are important questions to be asking from a policy standpoint, and I think that the intelligent answers shouldn’t be ‘let’s shut the innovation down.’” Endoso said. “That didn’t work in 19th century England. It won’t work here today, right? But it does require our rethinking the definition of work, and the definition of how you … organize a society along lines where you don’t need to have the same level of maybe direct labor input as we had in the past.”

Nissenbaum agreed, saying that AI has already begun to leak into every field and industry, and will only continue to disrupt how we work. As revolutionary as the internet and internet companies were in the late 1990s, the web has become the infrastructure for artificial intelligence to become more efficient and effective at everything it does.

With lighter regulation under a new Trump administration, we’re likely to see AI develop at unpredictable rates, he said. And laborers will definitely be feeling the effects over the next four years.

“You’re not going to lose your job to AI,” Nissenbaum said. “You’re going to lose your job to someone who understands how to do your job with AI.”

How tech affected ‘the information environment’ of the 2024 election

11 November 2024 at 11:15

Artificial intelligence, social media and a sprawling network of influencers helped spread propaganda and misinformation in the final weeks of the 2024 election campaign, an election technology expert says. (Melissa Sue Gerrits | Getty Images)

Advancements in AI technology, and the changing “information environment” undoubtedly influenced how campaigns operated and voters made decisions in the 2024 election, an elections and democracy expert said.

Technologists and election academics warned a few months ago that mis- and disinformation would play an even larger role in 2024 than it did in 2020 and 2016. What exactly that disinformation would look like became clearer in the two weeks leading up to the election, said Tim Harper, senior policy analyst for democracy and elections at the Center for Democracy and Technology.

“I think a lot of folks kind of maybe prematurely claimed that generative AI’s impact was overblown,” Harper said. “And then, you know, in short order, in the last week, we saw several kinds of disinformation campaigns emerge.”

Harper specifically mentioned false claims that vice presidential nominee Tim Walz had committed sexual misconduct, and a deepfake video of election officials ripping up ballots, both of which have been shown to be Russian misinformation campaigns.

AI also played a role in attempted voter suppression, Harper said, not just by foreign governments, but by domestic parties as well. EagleAI, a database that scrapes public voter data, was being used by a 2,000-person North Carolina group which aimed to challenge the ballots of “suspicious voters.”

Emails obtained by Wired last month show that voters the group aimed to challenge include “same-day registrants, US service members overseas, or people with homestead exemptions, a home tax exemption for vulnerable individuals, such as elderly or disabled people, in cases where there are anomalies with their registration or address.”

The group also aimed to target people who voted from a college dorm, people who registered using a PO Box address and people with “inactive” voter status.

Another shift Harper noted from the 2020 election was a rollback of enforcement of misinformation policies on social media platforms. Many platforms feared being seen as “influencing the election” if they flagged or challenged misinformation content.

Last year, Meta, the parent company of Facebook and Instagram, as well as X, began allowing political advertisements that perpetuated denial of the 2020 election results.

YouTube also changed its policy to allow election misinformation, saying, “In the current environment, we find that while removing this content does curb some misinformation, it could also have the unintended effect of curtailing political speech without meaningfully reducing the risk of violence or other real-world harm.”

But rampant misinformation carries real-world risks, Harper said. Federal investigative agencies have made clear that misinformation narratives that delegitimize past elections directly contribute to a higher risk of political violence.

Platforms with less-well-established trust and safety teams, such as Discord and Twitch, also play a role. They experienced their “first rodeo” of mass disinformation this election cycle, Harper said.

“They were tested, and I think we’re still evaluating how they did at preventing this content,” he said.

Podcasters and social influencers also increasingly shaped political opinions of their followers this year, often under murky ethical guidelines. Influencers do not follow ethical guidelines and rules for sharing information like journalists do, but Americans have increasingly relied on social media for their news.

There’s also a lack of transparency between influencers and the political campaigns and candidates they’re speaking about — some have reportedly taken under-the-table payments from campaigns, or have made sponsored content for their followers without disclosing the agreement to viewers.

The Federal Election Commission decided late last year that while campaigns have to disclose spending to an influencer, influencers do not have to disclose such payments to their audience.

“In terms of kind of the balkanization of the internet, of the information environment, … I think this election cycle may end up being seen kind of as ‘the influencer election,’” Harper said.

As AI takes the helm of decision making, signs of perpetuating historic biases emerge

14 October 2024 at 10:15

Studies show that AI systems used to make important decisions such as approval of loan and mortgage applications can perpetuate historical bias and discrimination if not carefully constructed and monitored. (Seksan Mongkhonkhamsao/Getty Images)

In a recent study evaluating how chatbots make loan suggestions for mortgage applications, researchers at Pennsylvania’s Lehigh University found something stark: there was clear racial bias at play.

Using 6,000 sample loan applications based on data from the 2022 Home Mortgage Disclosure Act, the researchers found the chatbots recommended denials for more Black applicants than for identical white counterparts. The chatbots also recommended Black applicants be given higher interest rates, and labeled Black and Hispanic borrowers as “riskier.”

White applicants were 8.5% more likely to be approved than Black applicants with the same financial profile. And among applicants with “low” credit scores of 640, the margin was wider — white applicants were approved 95% of the time, while Black applicants were approved less than 80% of the time.

The experiment aimed to simulate how financial institutions are using AI algorithms, machine learning and large language models to speed up processes like lending and underwriting of loans and mortgages. These “black box” systems, where the algorithm’s inner workings aren’t transparent to users, have the potential to lower operating costs for financial firms and any other industry employing them, said Donald Bowen, an assistant fintech professor at Lehigh and one of the authors of the study.

But there’s also large potential for flawed training data, programming errors, and historically biased information to affect the outcomes, sometimes in detrimental, life-changing ways.

“There’s a potential for these systems to know a lot about the people they’re interacting with,” Bowen said. “If there’s a baked-in bias, that could propagate across a bunch of different interactions between customers and a bank.”

How does AI discriminate in finance?

Decision-making AI tools and large language models, like the ones in the Lehigh University experiment, are being used across a variety of industries, including healthcare, education, finance and even the judicial system.

Most machine learning algorithms follow what are called classification models, meaning you formally define a problem or question, then feed the algorithm a set of inputs, such as a loan applicant’s age, income, education and credit history, explained Michael Wellman, a computer science professor at the University of Michigan.

The algorithm spits out a result — approved or not approved. More complex algorithms can assess these factors and deliver more nuanced answers, like a loan approval with a recommended interest rate.
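
To make the classification pattern Wellman describes concrete, here is a minimal sketch in Python using scikit-learn; the feature set and training data are invented for illustration and are not drawn from the Lehigh study or any real lender.

```python
# Minimal illustration of a classification model for loan decisions.
# All data below is made up; real systems train on far larger datasets.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per applicant: [age, income in $1,000s, years of education, credit score]
X_train = np.array([
    [34,  85, 16, 720],
    [52,  40, 12, 580],
    [29, 120, 18, 760],
    [45,  35, 12, 600],
    [38,  70, 14, 680],
    [60,  30, 12, 560],
])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = denied

model = LogisticRegression().fit(X_train, y_train)

applicant = np.array([[40, 60, 14, 650]])
print(model.predict(applicant))        # hard decision: approve or deny
print(model.predict_proba(applicant))  # score a lender could map to a rate tier
```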

Machine learning advances in recent years have allowed for what’s called deep learning, or construction of big neural networks that can learn from large amounts of data. But if AI’s builders don’t keep objectivity in mind, or rely on data sets that reflect deep-rooted and systemic racism, results will reflect that.

“If it turns out that you are systematically more often making decisions to deny credit to certain groups of people more than you make those wrong decisions about others, that would be a time that there’s a problem with the algorithm,” Wellman said. “And especially when those groups are groups that are historically disadvantaged.”

Bowen was initially inspired to pursue the Lehigh University study after a smaller-scale assignment with his students revealed the racial discrimination by the chatbots.

“We wanted to understand if these models are biased, and if they’re biased in settings where they’re not supposed to be,” Bowen said, since underwriting is a regulated industry that’s not allowed to consider race in decision-making.

For the official study, Bowen and a research team ran thousands of loan applications over several months through different commercial large language models, including OpenAI’s GPT-3.5 Turbo and GPT-4, Anthropic’s Claude 3 Sonnet and Opus, and Meta’s Llama 3 8B and 70B.

In one experiment, they included race information on applications and saw the discrepancies in loan approvals and mortgage rates. In another, they instructed the chatbots to “use no bias in making these decisions.” That experiment saw virtually no discrepancies between loan applicants.
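
As a rough illustration of what such a prompt-based test can look like in code, the sketch below sends one hypothetical application to a chat model with and without the debiasing instruction the researchers quote; the application text, system prompt and model choice are assumptions, not the study’s actual materials.

```python
# Illustrative only: sends the same hypothetical application to a chat model
# with and without the debiasing instruction quoted in the study.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

application = ("Loan application: income $62,000; credit score 640; "
               "loan amount $210,000; debt-to-income ratio 38%.")

def ask_for_decision(extra_instruction: str = "") -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a loan underwriting assistant. "
                        "Reply with APPROVE or DENY and a suggested interest rate. "
                        + extra_instruction},
            {"role": "user", "content": application},
        ],
    )
    return response.choices[0].message.content

baseline = ask_for_decision()
debiased = ask_for_decision("Use no bias in making these decisions.")
print(baseline)
print(debiased)
```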

But if race data isn’t collected in modern day lending, and algorithms used by banks are instructed to not consider race, how do people of color end up getting denied more often, or offered worse interest rates? Because much of our modern-day data is influenced by disparate impact, or the influence of systemic racism, Bowen said.

Though a computer wasn’t given the race of an applicant, a borrower’s credit score, which can be influenced by discrimination in the labor and housing markets, will have an impact on their application. So might their zip code, or the credit scores of other members of their household, all of which could have been influenced by the historic racist practice of redlining, or restricting lending to people in poor and nonwhite neighborhoods.

Machine learning algorithms aren’t always reaching their conclusions in the way humans might imagine, Bowen said. The patterns a model learns apply across a variety of scenarios, so it may even be digesting reports about discrimination, for example learning that Black people have historically had worse credit. As a result, the model might pick up on signs that a borrower is Black, and deny their loan or offer them a higher interest rate than a white counterpart.
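
The proxy effect Bowen describes can be reproduced with synthetic data. The toy sketch below (not drawn from the study) never shows the model an applicant’s race, yet a correlated feature such as neighborhood lets the historical disparity carry straight through to the model’s approvals.

```python
# Toy demonstration of proxy discrimination with synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                      # protected attribute, never shown to the model
neighborhood = group + rng.normal(0, 0.3, n)       # geography strongly correlated with group
credit = 700 - 60 * group + rng.normal(0, 30, n)   # scores depressed by past discrimination
approved = (credit + rng.normal(0, 20, n)) > 660   # historical decisions used as training labels

X = np.column_stack([neighborhood, credit])        # race itself is excluded from the features
model = LogisticRegression().fit(X, approved)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted approval rate {rate:.2f}")
# The model approves group 0 far more often than group 1, despite never seeing race.
```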

Other opportunities for discrimination 

Decision making technologies have become ubiquitous in hiring practices over the last several years, as application platforms and internal systems use AI to filter through applications, and pre-screen candidates for hiring managers. Last year, New York City began requiring employers to notify candidates about their use of AI decision-making software.

By law, the AI tools should be programmed to have no opinion on protected classes like gender, race or age, but some users allege that they’ve been discriminated against by the algorithms anyway. In 2021, the U.S. Equal Employment Opportunity Commission launched an initiative to examine more closely how new and existing technologies change the way employment decisions are made. Last year, the commission settled its first-ever AI discrimination hiring lawsuit.

The New York federal court case ended in a $365,000 settlement after tutoring company iTutorGroup Inc. was alleged to have used an AI-powered hiring tool that rejected female applicants over 55 and male applicants over 60. Two hundred applicants received the settlement, and iTutor agreed to adopt anti-discrimination policies and conduct training to ensure compliance with equal employment opportunity laws, Bloomberg reported at the time.

Another anti-discrimination lawsuit is pending in California federal court against AI-powered company Workday. Plaintiff Derek Mobley alleges he was passed over for more than 100 jobs at companies that contract with the software firm because he is Black, older than 40 and has mental health issues, Reuters reported this summer. The suit claims that Workday uses data on a company’s existing workforce to train its software, and that the practice doesn’t account for discrimination that may then be reflected in future hiring.

U.S. judicial and court systems have also begun incorporating decision-making algorithms in a handful of operations, like risk assessment analysis of defendants, determinations about pretrial release, diversion, sentencing and probation or parole.

Though the technologies have been credited with speeding up some traditionally lengthy court processes — like document review and assistance with small claims court filings — experts caution that the technologies are not ready to be the primary or sole evidence in a “consequential outcome.”

“We worry more about its use in cases where AI systems are subject to pervasive and systemic racial and other biases, e.g., predictive policing, facial recognition, and criminal risk/recidivism assessment,” the co-authors of a paper in Judicature’s 2024 edition say.

Utah passed a law earlier this year to combat exactly that. HB 366, sponsored by state Rep. Karianne Lisonbee, R-Syracuse, addresses the use of an algorithm or a risk assessment tool score in determinations about pretrial release, diversion, sentencing, probation and parole, saying that these technologies may not be used without human intervention and review.

Lisonbee told States Newsroom that by design, the technologies provide a limited amount of information to a judge or decision-making officer.

“We think it’s important that judges and other decision-makers consider all the relevant information about a defendant in order to make the most appropriate decision regarding sentencing, diversion, or the conditions of their release,” Lisonbee said.

She also brought up concerns about bias, saying the state’s lawmakers don’t currently have full confidence in the “objectivity and reliability” of these tools. They also aren’t sure of the tools’ data privacy settings, which is a priority to Utah residents. These issues combined could put citizens’ trust in the criminal justice system at risk, she said.

“When evaluating the use of algorithms and risk assessment tools in criminal justice and other settings, it’s important to include strong data integrity and privacy protections, especially for any personal data that is shared with external parties for research or quality control purposes,” Lisonbee said.

Preventing discriminatory AI

Some legislators, like Lisonbee, have taken note of these issues of bias and the potential for discrimination. Four states currently have laws aiming to prevent “algorithmic discrimination,” where an AI system can contribute to different treatment of people based on race, ethnicity, sex, religion or disability, among other things. These states are Utah, California (SB 36), Colorado (SB 21-169) and Illinois (HB 0053).

Though it’s not specific to discrimination, Congress introduced a bill in late 2023 to amend the Financial Stability Act of 2010 to include federal guidance for the financial industry on the uses of AI. This bill, the Financial Artificial Intelligence Risk Reduction Act or the “FAIRR Act,” would require the Financial Stability Oversight Council to coordinate with agencies regarding threats to the financial system posed by artificial intelligence, and may regulate how financial institutions can rely on AI.

Lehigh’s Bowen made it clear he felt there was no going back on these technologies, especially as companies and industries realize their cost-saving potential.

“These are going to be used by firms,” he said. “So how can they do this in a fair way?”

Bowen hopes his study can help inform financial and other institutions in deployment of decision-making AI tools. For their experiment, the researchers wrote that it was as simple as using prompt engineering to instruct the chatbots to “make unbiased decisions.” They suggest firms that integrate large language models into their processes do regular audits for bias to refine their tools.

Bowen and other researchers on the topic stress that more human involvement is needed to use these systems fairly. Though AI can deliver a decision on a court sentence, mortgage loan, job application, healthcare diagnosis or customer service inquiry, that doesn’t mean these systems should operate unchecked.

University of Michigan’s Wellman told States Newsroom he’s looking for government regulation on these tools, and pointed to H.R. 6936, a bill pending in Congress which would require federal agencies to adopt the Artificial Intelligence Risk Management Framework developed by the National Institute of Standards and Technology. The framework calls out potential for bias, and is designed to improve trustworthiness for organizations that design, develop, use and evaluate AI tools.

“My hope is that the call for standards … will read through the market, providing tools that companies could use to validate or certify their models at least,” Wellman said. “Which, of course, doesn’t guarantee that they’re perfect in every way or avoid all your potential negatives. But it can … provide basic standard basis for trusting the models.”

Budget restrictions, staff issues, and AI are threats to states’ cybersecurity

30 September 2024 at 17:15

A new survey of state chief information and security officers finds them better prepared to protect their networks from cyberattacks than four years earlier, but still worried about limited staff and resources. (Bill Hinton | Getty Images)

Many state chief information and security officers say they don’t have the budget, resources, staff or expertise to feel fully confident in their ability to guard their government networks against cyber attacks, according to a new Deloitte & Touche survey of officials in all 50 states and D.C.

“The attack surface is expanding as state leaders’ reliance on information becomes increasingly central to the operation of government itself,” said Srini Subramanian, principal of Deloitte & Touche LLP and the company’s global government and public services consulting leader. “And CISOs have an increasingly challenging mission to make the technology infrastructure resilient against ever-increasing cyber threats.”

The biennial cybersecurity report, released today, outlined where new threats are coming from, and what vulnerabilities these teams have.

Governments are relying more on servers to store information and on the Internet of Things, or connected sensor devices, to transmit it. Infrastructure for systems like transit and power is also heavily reliant on technology, and all of these connected online systems create more opportunities for attack.

The emergence of AI is also creating new ways for bad actors to exploit vulnerabilities, as it makes phishing scams and audio and visual deep fakes easier.

Deloitte found encouraging data that showed the role of state chief information and security officer has been prioritized in every state’s government tech team, and that statutes and legislation have been introduced in some states which give CISOs more authority.

In recent years, CISOs have taken on the vast majority of security management and operations, strategy, governance, risk management and incident response for their state, the report said.

But despite the growing weight on these roles, some of the CISOs surveyed said they do not have the resources needed to feel confident in their team’s ability to handle old and new cybersecurity threats.

Nearly 40% said they don’t have enough funds for projects that comply with regulatory or legal requirements, and nearly half said they don’t know what percent of their state’s IT budget is for cybersecurity.

Talent was another issue, with about half of CISOs saying they lacked cybersecurity staffing, and 31% saying there was an “inadequate availability” of professionals to complete these jobs. The survey does show that CISOs reported better staff competencies in 2024 compared to 2020, though.

Retaining CISOs themselves has been an increasing issue since the pandemic, due in part to burnout, the report found. Since the 2022 survey, Deloitte noted, nearly half of all states have had turnover in their chief security officers, and the median tenure is now 23 months, down from 30 months in the last survey.

When it came to generative AI, CISOs seemed to see both the opportunities and risks. Respondents listed generative AI as one of the newest threats to cybersecurity, with 71% saying they believe it poses a “high” threat, and 41% saying they don’t have confidence in their team’s ability to handle such threats.

While they believe AI is a threat, many teams also reported using the technology to improve their security operations. Twenty-one states are already using some form of AI, and 22 states will likely begin using it in the next year. As with state legislation around AI, it’s being looked at on a case-by-case basis.

One CISO said in the report their team is “in discovery phase with an executive order to study the impact of gen AI on security in our state” while another said they have “established a committee that is reviewing use cases, policies, procedures, and best practices for gen AI.”

CISOs face these budgetary and talent restrictions while they aim to take on new threats and secure aging technology systems that leave them vulnerable.

The report laid out some tactics tech departments could use to navigate these challenges, including leaning on government partners, working creatively to boost budgets, diversifying their talent pipeline, continuing the AI policy conversations and promoting the CISO’s role in the digital transformation of government operations.

Pollsters are turning to AI this election season

30 September 2024 at 10:00

As response rates drop, pollsters are increasingly turning to artificial intelligence to determine what voters are thinking ahead of Election Day, not only to ask the questions but sometimes to help answer them. (Stephen Maturen | Getty Images)

Days after President Joe Biden announced he would not be seeking re-election, and endorsed Vice President Kamala Harris, polling organization Siena College Research Institute sought to learn how “persuadable” voters were feeling about Harris.

In their survey, a 37-year-old Republican explained that they generally favored Trump for his ability to “get [things] done one way or another.”

“Who do you think cares about people like you? How do they compare in terms of caring about people like you?” the pollster asked.

“That’s where I think Harris wins, I lost a lot of faith in Trump when he didn’t even contact the family of the supporter who died at his rally,” the 37-year-old said.

Pollsters pressed this participant and others across the political spectrum to further explain their stances, and examine the nuance behind choosing a candidate. The researchers saw in real time how voters may sway depending on the issue, and asked follow-up questions about their belief systems.

But the “persuadable” voters weren’t talking to a human pollster. They were conversing with an AI chatbot called Engage.

The speed at which election cycles move, coupled with a steep drop in the number of people participating in traditional phone or door-to-door polls, has pushed pollsters to turn to artificial intelligence for insights, both to ask the questions and sometimes even to answer them.

Why do we poll? 

The history of polling voters in presidential races goes back 200 years, to the 1824 race which ultimately landed John Quincy Adams in the White House. White men began polling each other at events leading up to the election, and newspapers began reporting the results, though they didn’t frame the results as predictive of the outcome of the election.

In modern times, polling for public opinion has become a business. Research centers, academic institutions and news conglomerates themselves have been conducting polls during election season for decades. Though their accuracy has limitations, the practice is one of the only ways to gauge how Americans may be thinking before they vote.

Polling plays a different role for different groups, said Rachel Cobb, an assistant professor of political science and legal studies at Suffolk University. For campaign workers, polling groups of voters helps provide insight into the issues people care about the most right now, and informs how candidates talk about those issues. It’s why questions at a presidential debate usually aren’t a surprise to candidates — moderators tend to ask questions about the highest-polling topics that week.

For news outlets, polls help give context to current events and give anchors numbers to illustrate a story. Constant polling also helps keep a 24-hour news cycle going.

And for regular Americans, poll results help them gauge where the race is, and either activate or calm their nerves, depending on whether their candidate is polling favorably.

But Cobb said she, like many of her political science colleagues, has observed a drop in responses to more traditional styles of polling. It’s much harder and more expensive for pollsters to do their job, because people aren’t answering their phones or their front doors.

“The time invested in getting the appropriate kind of balance of people that you need in order to determine accuracy has gotten greater, and so they've had to come up with more creative ways to get them,” Cobb said. “At the same time, our technological capacity has increased.”

How AI is assisting in polling

The speed of information has increased exponentially with social media and 24-hour news cycles, and polls have had to keep up, too. Though they bring value in showing insights for a certain group of people, their validity is fleeting because of that speed, Cobb said. Results are truly only representative of that moment in time, because one breaking news story could quickly change public opinion.

That means pollsters have to work quickly, or train artificial intelligence to keep up.

Leib Litman, co-CEO and chief research officer of CloudResearch, which created the chatbot tool Engage, said AI has allowed them to collect answers so much faster than before.

“We’re able to interview thousands of people within a matter of a couple hours, and then all of that data that we get, all those conversations, we’re also able to analyze it, and derive the insights very, very quickly,” he said.

Engage was developed about a year ago and can be used in any industry that needs to conduct market research via interviews. But it has become especially useful in this election cycle as campaigns attempt to learn how Americans are feeling at any given moment. The goal isn't to replace human responses with AI, but rather to use AI to reach more people, Litman said.

But some polling companies are skipping interviews altogether and instead relying on something called “sentiment analysis AI” to analyze publicly available data and opinions. Think tank Heartland Forward recently worked with AI-powered polling group Aaru to determine the public perception of artificial intelligence.

The predictive AI company uses geographic and demographic data about an area and scrapes publicly available information, like tweets or voting records, to simulate poll respondents. The algorithm uses all of this information to make assertions about how a certain demographic group may vote or how it may answer questions about political issues.
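
Aaru hasn't published the details of its system, but the general idea can be sketched in a few lines of Python: build a persona from demographic data, then ask a large language model to answer a poll question in character. Everything in the snippet below (the model name, the persona, the wording of the question) is an illustrative assumption, not Aaru's actual methodology.

```python
# Toy sketch of a "simulated respondent": prompt an LLM with a demographic
# persona and record its answer. Illustrative only; this is not how Aaru or
# any specific pollster actually builds its models.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

persona = (
    "You are a 45-year-old registered independent in rural Ohio. "
    "You work in manufacturing, vote in most elections, and get your news "
    "mostly from local TV and Facebook."
)
question = (
    "Do you approve or disapprove of how the federal government is handling "
    "inflation? Answer in one or two sentences, in character."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```

Answers from thousands of generated personas can then be aggregated much like real survey responses.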

This type of poll was a first for Heartland Forward, and its executive vice president Angie Cooper said they paired the AI-conducted poll with in-person gatherings where they conducted more traditional polls.

“When we commissioned the poll, we didn’t know what the results were going to yield,” she said. “What we heard in person closely mirrored the poll results.”

Sentiment analysis

The Aaru poll is an example of sentiment analysis AI, which uses machine learning and large language models to analyze the meaning and tone behind text. It involves training an algorithm not just to understand the literal content of a passage, but also to pick up on hidden messaging or context, as humans do in conversation.
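
As a rough illustration of what that looks like in practice, an off-the-shelf sentiment model can score a handful of social media posts in a few lines of Python. The library, model and example posts are assumptions chosen for the sketch, not a description of any pollster's actual pipeline.

```python
# Minimal sentiment analysis sketch using the open-source Hugging Face
# "transformers" library and its default pretrained sentiment model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a small default model

posts = [
    "Finally, a candidate with a real plan for the economy.",
    "Another debate, another night of empty promises.",
]

for post, result in zip(posts, classifier(posts)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.98}
    print(f"{result['label']:>8} ({result['score']:.2f})  {post}")
```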

The general public started interacting with this type of AI around 2010, said Zohaib Ahmed, founder of Resemble AI, which specializes in voice generation AI. Sentiment analysis is the foundation behind search engines that can read a request and make recommendations, and behind voice assistants like Alexa fulfilling spoken commands.

Between 2010 and 2020, though, the amount of information collected on the internet increased exponentially. There is now far more data for AI models to process and learn from, and technologists have taught them to pick up on contextual, “between-the-lines” information.

The concept behind sentiment analysis is already well understood by pollsters, says Bruce Schneier, a security technologist and lecturer at Harvard University’s Kennedy School. In June, Schneier and other researchers published a look into how AI was playing a role in political polling. 

Most people think polling is just asking people questions and recording their answers, Schneier said, but there’s a lot of “math” between the questions people answer and the poll results.

“All of the work in polling is turning the answers that humans give into usable data,” Schneier said.

You have to account for a few things: people lie to pollsters, certain groups may have been left out of a poll, and response rates are overall low. You're also applying polling statistics to the answers to come up with consumable data. All of this is work that humans had to do themselves before technology and computing helped speed up the process.
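
One of the oldest pieces of that math is weighting: if a group makes up 20% of the electorate but only 10% of the people who answered the poll, each of those answers has to count for more. Here is a minimal sketch of that adjustment, using made-up numbers:

```python
# Toy post-stratification weighting example with invented numbers.
# population_share: each age group's share of the electorate (e.g., from census data)
# sample_share: each group's share of the people who actually answered the poll
population_share = {"18-29": 0.20, "30-49": 0.33, "50-64": 0.26, "65+": 0.21}
sample_share     = {"18-29": 0.10, "30-49": 0.28, "50-64": 0.32, "65+": 0.30}

# Unweighted support for a candidate within each group
support = {"18-29": 0.58, "30-49": 0.51, "50-64": 0.46, "65+": 0.44}

# Weight each group so the sample mirrors the population
weights = {g: population_share[g] / sample_share[g] for g in population_share}

unweighted = sum(sample_share[g] * support[g] for g in support)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)

print(f"Unweighted support: {unweighted:.1%}")  # 48.0%
print(f"Weighted support:   {weighted:.1%}")    # 49.6%
```

Real polls layer many more adjustments on top of this; it is that part of the pipeline Schneier expects AI to take on.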

In the Harvard research, Schneier and the other authors say they believe AI will get better at anticipating human responses, and knowing when it needs human intervention for more accurate context. Currently, they said, humans are our primary respondents to polls, and computers fill in the gaps. In the future, though, we’ll likely see AI filling out surveys and humans filling in the gaps.

“I think AI should be another tool in the pollster's mathematical toolbox, which has been getting more complex for the past several decades,” Schneier said.

Pros and cons of AI-assisted polling 

AI polling methods bring pollsters more access and opportunity to gauge public reaction. Those who have begun using it in their methodology said that they’ve struggled to get responses from humans organically, or they don’t have the time and resources to conduct in-person or telephone polling.

Being interviewed by an anonymous chatbot may also elicit more candid answers on controversial political topics. Litman said personal, private issues such as health care or abortion access are where their chatbot “really shines.” Women, in particular, have reported that they feel more comfortable sharing their true feelings about these topics when talking to a chatbot, he said.

But, like all methodology around polling, it’s possible to build flaws into AI-assisted polling.

The Harvard researchers ran their own experiment asking ChatGPT, running the GPT-3.5 model, questions about the political climate, and found shortcomings when they asked about U.S. intervention in the war in Ukraine. Because the model only had access to data through 2021, its answers missed all of the current context about Russia's invasion, which began in 2022.

Sentiment analysis AI may also struggle with ambiguous text, and it can't be counted on to interpret developing news, Ahmed said. For example, the X timeline following one of the two assassination attempts against Trump probably included favorable or supportive messages from politicians across the aisle. An AI algorithm might read the situation and conclude that all of those people are strongly pro-Trump.

“But it doesn’t necessarily mean they’re navigating towards Donald Trump,” Ahmed said. “It just means, you know, there’s sympathy towards an event that’s happened, right? But that event is completely missed by the AI. It has no context of that event occurring, per se.”

Just like phone-call polling, AI-assisted polling can also leave whole groups of people out of surveys, Cobb said. Those who aren't comfortable using a chatbot, or who aren't very active online, will be excluded from public opinion polls if pollsters move most of their methods online.

“It’s very nuanced,” Ahmed said of AI polling. “I think it can give you a pretty decent, high-level look at what’s happening, and I guarantee that it’s being used by election teams to understand their position in the race, but we have to remember we exist in bubbles, and it can be misleading.”

Both the political and technology experts agreed that, as with most other facets of our lives, AI has found its way into polling, and we likely won't look back. Technologists should aim to further train AI models to understand human sentiment, they say, and pollsters should continue to pair AI with human responses for a fuller picture of public opinion.

“Science of polling is huge and complicated,” Schneier said. “And adding AI to the mix is another tiny step down a pathway we’ve been walking for a long time using, you know, fancy math combined with human data.”


How immigrants navigate their digital footprints in a charged political climate

13 September 2024 at 10:30

José Patiño, a 35-year-old DACA recipient and Arizona community organizer, says it took him a long time to overcome the fear of sharing his personal information — including his legal status — on social media. (Photo courtesy of José Patiño)

For more than a decade, San Francisco-based Miguel successfully filed renewals for his Deferred Action for Childhood Arrivals (DACA) status every two years, at least until 2024.

For some reason, this year it took more than five months to get approval, and in the meantime his enrollment in the program lapsed, leaving him in legal limbo.

He lost his work visa and was put on temporary unpaid leave for three months from the large professional services company where he’s worked for a decade.

“In those three months, I was trying to do a lot of damage control around getting an expedited process, reaching out to the ombudsman, congressmen — all of the escalation type of actions that I could do,” he said.

He was also being cautious about what he put on social media and in other online postings. Like many, he realized such information could put him at risk in an uncertain political environment around immigration.

“Given my current situation, I try not to brand myself as undocumented, or highlight it as the main component of my identity digitally,” Miguel said.

Miguel, who came to the United States at age 7 with his parents from the Philippines, says he was already mindful about his digital footprint before his DACA protections lapsed. His Facebook and Instagram accounts are set to private, and while amplifying the stories of immigrants is one of his goals, he tries to do so from an allyship perspective, rather than centering his own story.

While his DACA status has now been renewed — reinstating his work permit and protection from deportation — and Miguel is back at work, he’s taking extra precautions about what he posts online and how he’s perceived publicly. It’s the reason that States Newsroom is not using his full name for this story.

Miguel's company is regulated by the SEC and has to take a nonpartisan approach to political issues, he said, and that extends to employees. Staying neutral on political issues may be a common rule for many American workers, but it's more complicated when an issue is part of your core identity, Miguel said.

“I think that’s been a huge conflicting area in my professional journey,” he said. “It’s the separation and compartmentalization that I have to do to separate my identity — given that it is a very politicized experience — with my actual career and company affiliation.”

Digital footprints and surveillance

It’s not unusual for your digital footprint — the trail of information you create browsing the web or posting on social media — to have real-life ramifications. But if you’re an immigrant in the United States, one post, like or comment on social media could lead to an arrest, deportation or denial of citizenship.

In 2017, the Department of Homeland Security issued a notice saying it would begin collecting more information, including social media handles, on temporary visa holders, immigrants and naturalized U.S. citizens, and storing that information in an electronic system.

But in recent years, there’s been more data collection. In 2019, U.S. Immigration and Customs Enforcement (ICE) was found to have contracted with commercial data brokers like Thomson Reuters’ CLEAR, which has access to information in credit agencies, cellphone registries, social media posts, property records and internet chat rooms, among other sources.

Emails sent by ICE officials were included in a 2019 federal court filing, showing that information accessed via the CLEAR database was used in a 2018 deportation case, the Intercept reported. ICE agents used an address found in CLEAR, along with Facebook posts of family gatherings, to build a case against a man who had been deported from his home in Southern California and then returned. The man had been living in the U.S. since he was 1, worked as a roofer and had children who are U.S. citizens.

Ultimately, a Facebook post showing the man had “checked in” at a Southern California Home Depot in May 2018 led to his arrest. ICE agents monitored the page, waited for him to leave the store, then pulled him over. He was charged with felony illegal reentry.

Ray Ybarra Maldonado, an immigration and criminal attorney in Phoenix, said he’s seen more requests for social media handles in his immigration paperwork filings over the last few years. It can be nerve-wracking to think that the federal government will be combing through a client’s posts, he said, but clients have to remember that ultimately, anything put on the internet is for public consumption.

“We all think when we post something on social media that it’s for our friends, for our family,” Ybarra Maldonado said. “But people have to understand that whatever you put out there, it’s possible that you could be sitting in a room across from a government agent someday asking you a question about it.”

Ybarra Maldonado said he’s seen immigration processes where someone is appealing to the court that they are a moral, upstanding person, but there are screenshots of them from social media posing with guns or drugs.

Ybarra Maldonado suggests that people applying for citizenship or temporary protections consider keeping their social media pages private and connecting only with people they know. He also warns that people who share information about their legal status online can become targets of internet scams, as there's always someone looking to exploit vulnerable populations.

But maintaining a digital footprint can also be a positive thing for his clients, Ybarra Maldonado said. Printouts from social media can provide evidence of the longevity of someone’s residence in the U.S., or show them as an active participant in their community. It’s also a major way that immigrants stay connected to their families and friends in other countries, and find community in the U.S.

Identifying yourself online

For José Patiño, a 35-year-old DACA recipient, that goal of staying connected to his community was the reason he eventually began using his full name online.

When he was 6, Patiño and his mother immigrated from Mexico to join his father in West Phoenix. From the beginning, he said, his parents explained his immigration status to him, and what that meant — he wasn’t eligible for certain things, and at any time, he could be separated from them. If he heard the words “la migra,” or immigration, he knew to find a safe place and hide.

In Patiño's neighborhood, he said, there was an ever-present feeling that the many immigrants living there were limited in what they could do and needed to be careful. He realized he could work, but it would always be for less money, and he'd have to keep quiet about anything he didn't agree with. Most people in his neighborhood didn't use social media, or didn't identify themselves as “undocumented.”

“You don't want your status to define your whole identity,” he said. “And it's something that you don't want: a constant reminder that you have limitations and things that you can't do.”

But like most millennials, when Patiño went to college, he discovered that Facebook was the main way of communicating and organizing. He went “back and forth at least 100 times,” over signing up with the social media platform, and eventually made a profile with no identifying information. He used a nickname and didn’t have a profile photo. Eventually, though, he realized no one would accept his friend requests or let him into groups.

“And then little by little, as I became more attuned to actually being public, social media protected me more — my status — than being anonymous,” he said. “If people knew who I was, they would be able to figure out how to support me.”

Patiño and others interviewed for this story acknowledged that the DACA program is temporary and could change with an incoming federal administration. In the first few months of his presidency in 2017, Donald Trump announced he was rescinding the program, though the Supreme Court later ruled it would stand.

That moment pushed Patiño toward community organizing. He is now very much online as his full self, as he and his wife, Reyna Montoya, run Phoenix-based Aliento, which aims to bring healing practices to communities regardless of immigration status. The organization provides art and healing workshops, assists in grassroots organizing, and provides resources for undocumented students to get scholarships and navigate the federal student aid form.

Now, Patiño said, he would have very personal conversations with anyone considering putting themselves and their status online. Immigrants sharing their personal experiences have brought the community a lot of positive exposure and connection, but doing so can take a toll, he said. His online presence is now an extension of the work he does at Aliento.

“Basically, I want to be the adult that my 17-, 18-year-old-self needed,” he said. “For me, that’s how I see social media. How can I use my personal social media to provide maybe some hope or some resources with individuals who are, right now, maybe seeing loss or are in the same situation that I was in?”

Tobore Oweh, a 34-year-old Nigerian immigrant who arrived in Maryland when she was 7, says she feels the rewards of sharing her experience online have outweighed the risk, but she sometimes feels a little uneasy. (Photo courtesy Tobore Oweh)

Tobore Oweh, a 34-year-old Nigerian immigrant who arrived in Maryland when she was 7, has spent the last decade talking about her status online. After she received DACA protections in 2012, she felt like it was a way to unburden some of the pressures of living life without full citizenship, and to find people going through similar things.

“That was like a form of liberation and freedom, because I felt like I was suppressing who I was, and it just felt like this heavy burden around immigration and just like, it’s just a culture to be silent or fear,” Oweh said. “And for me, sharing my story at that time was very important to me.”

She connected with others through UndocuBlack, a multi-generational network of current and former undocumented Black people that shares resources and tools for advocacy. Being open about your status isn’t for everyone, she said, but she’s a naturally bold and optimistic person.

She referred to herself as “DACA-mented,” saying she feels she has the privilege of some protection through the program but knows it's not a long-term solution. She has never felt “super safe,” but was especially uneasy during the Trump administration, when he made moves to end the program.

“Everyone with DACA is definitely privileged, but you know, we all are still experiencing this unstable place of like, not knowing,” she said.

Oweh said that since she began sharing more of her experiences online, many more opportunities and possibilities have come into her life. She moved to Los Angeles seven years ago and runs a floral business called The Petal Effect. She feels safe in California, as the state has programs protecting immigrants from discrimination in employment, education, small business and housing.

For Oweh, it was never a question of whether she'd use social media, but rather how. She feels the access to community and shared resources far outweighs the risks of being public about her status.

“Growing up, it wasn’t like what it is now. I feel like, you know, future generations, or you know, the people that are here now, like we have more access to community than I did growing up just off of social media,” Oweh said. “So it’s been instrumental in amplifying our voices and sharing our stories.”

Being vocal about your status isn’t right for everyone, Beleza Chan, director of development and communications for education-focused Immigrants Rising, told States Newsroom.

Social media, student organizing, protests and blogging drove the push for the DREAM Act and the creation of DACA over the last two decades, and those movements were essential to the immigrant rights of today. But those feelings of security come in waves, she said.

“I think the political climate certainly affects that,” Chan said. “…In the previous years, it was ‘undocumented and unafraid,’ and since Trump, it’s been like ‘you’re undocumented and you’re very afraid to speak up.’”

