A series of polls asked voters not who they will vote for, but who they think will win

30 October 2024 at 10:00

The track record of “citizen forecasting” of U.S. presidential election results is sort of startlingly good. (Photos: Jim Vondruska/Getty Images; Win McNamee/Getty Images)

Are you still scouring the internet for new polls and routinely checking polling averages hoping for fresh reassurances but finding precious few?

Are you poring over daily turnout reports from the Nevada Secretary of State’s office – and scanning news on turnout in other battleground states too – hoping scattered gobbets of inconclusive information will alleviate your angst, even though it is just as likely to aggravate it?

Maybe you should stop doing those things.

Alas, if you’ve read this far, you might be one of those souls – the highly engaged voter – for whom polling and turnout data at this point in an election cycle are like an automobile accident or a burning building: looking away is hard.

Sorry.

But there is a thing which, while it can’t rid you of your anxiety and fear and Sturm und Drang, might at least add a different perspective to it.

Yes, of course it’s another poll.

Or more specifically a series of polls.

Sabato’s Crystal Ball, the long-running and highly regarded political handicapping newsletter affiliated with the Center for Politics at the University of Virginia, on Wednesday summarized a series of polls conducted during this election cycle that asked respondents not who they will vote for in the presidential election, but who they think will win.

Why? Let’s let the Crystal Ball gazers explain:

“A growing body of evidence indicates that ‘citizen forecasting’ (CF)…makes for more accurate predictions of the winner. Indeed, studies of CF in the United States and the United Kingdom, as well as work on other democracies (such as Canada, France, or Germany) have demonstrated that voter expectations outperform voter intentions in terms of predictive accuracy.”

In other words, “wisdom of crowds” is a thing that’s a thing.

The Crystal Ball’s first survey asking respondents who they thought would win the 2024 presidential election was conducted way back in April 2023, when Ron DeSantis still looked like a going concern, and when a lot of people hoped Biden wouldn’t run after all (he officially announced his reelection bid near the end of that month).

In the April 2023 polling, 52% of respondents said they thought the Republican candidate would win the presidential election, and 48% said the Democratic candidate would.

The second round of polling wasn’t taken until a year later, in April 2024. By that time, poor DeSantis had been vanquished, Nikki Haley had distinguished her resume by finishing second to “none of these candidates” in the Nevada Republican primary, and the main thing Democrats were saying to each other was “whoa, Biden’s super old but we are stuck with him and we are doomed,” or words to that effect.

Everybody, or almost everybody, assumed it would be a Trump-Biden rerun of 2020. Asked who they thought would win the presidential election, 50% said Trump, and only 38% said Biden, with a mysterious “someone else” or the Kennedy oddity picked by the rest.

The Crystal Ball’s project concluded with four more waves of polling: in June, July, August, and September-October.

The June survey, conducted before the June 27 debate that crushed Democrats’ souls and would ultimately end Biden’s candidacy, indicated a close contest – 46% said Trump would win, 42% said Biden would.

The next survey was conducted July 20-22, a week after Trump’s ear got grazed in Pennsylvania, and partially overlapping with Biden’s July 21 announcement that he would step aside. It was the only poll in the series taken after the debate debacle while Biden was still in the race, and not surprisingly 54% said Trump would win, while only 32% thought Biden would.

The project’s next poll was conducted Aug. 20-26, about two weeks after Harris had secured the nomination and otherwise astounded a lot of folks by turning out to be much more of a boss than was widely thought. The script was flipped: Harris would win, said 56% of the August survey respondents, compared to 40% saying Trump would.

The fourth and last wave of polling, conducted Sept. 20-Oct. 2, had Harris at 55% to Trump’s 42%.

“This current citizen forecast points to a Harris victory in November,” concludes the Crystal Ball’s “Last Sounding” summary published Wednesday.

“Of course, close races are hard to call,” the summary adds, and citizen forecasting isn’t perfect. The Crystal Ball mentions the elections of 2000 and 2016 as examples.

In both those elections the person who won the presidency lost the popular vote. So this year’s surveys, in addition to asking voters who they thought would win, specifically asked them who they thought would win the electoral college, and the majority still expected a Harris victory.

And on the whole, the track record of citizen forecasting of U.S. presidential election results is sort of startlingly good.

Sabato’s Crystal Ball and the American National Election Studies together encompass a record of citizen forecast polling that stretches back to 1956. In every presidential election since then, “whenever the expectation percentage has exceeded 50%, as is the case with the Harris-Trump race, the forecast of the presidential winner has always been correct,” states the summary released Wednesday.

While the most recent polling in the series was conducted roughly a month before Election Day, the report adds that a similar lead time has been typical throughout the history of the series.

So Democrats can … take a breath?

Fat chance.

Betting markets – which might be considered a variant of citizen forecasting – are also often viewed as a more reliable predictor than traditional polls, and they indicate a much tighter race than the Crystal Ball citizen forecast, with Harris ahead in some and Trump in others. (There are also segments of the presidential betting market indicating a generous advantage for Trump, though that may reflect the machinations of crypto-bros more than the wisdom of crowds.)

About the same time the last polling in the Crystal Ball series was being done, the Cook Political Report also asked voters in battleground states not who they were voting for, but who they thought would win. Harris was ahead in that poll too – 46% said she would win, compared to 39% for Trump. But that’s below the 50% threshold in the historical record cited by the Crystal Ball.

And even given the impressive historical track record of citizen forecast polling, if any presidential campaign cycle in the modern era has already proven to be wildly different from all the others, it would be this one.

In other words, let the Democratic hand-wringing continue.

Harris would probably approve. She seems like a leader with a zero tolerance policy for complacency.

Nevada Current is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501c(3) public charity. Nevada Current maintains editorial independence. Contact Editor Hugh Jackson for questions: info@nevadacurrent.com. Follow Nevada Current on Facebook and X.

Pollsters are turning to AI this election season

30 September 2024 at 10:00

As response rates drop, pollsters are increasingly turning to artificial intelligence to determine what voters are thinking ahead of Election Day, using it not only to ask the questions but sometimes to help answer them. (Stephen Maturen | Getty Images)

Days after President Joe Biden announced he would not be seeking re-election, and endorsed Vice President Kamala Harris, polling organization Siena College Research Institute sought to learn how “persuadable” voters were feeling about Harris.

In their survey, a 37-year-old Republican explained that they generally favored Trump for his ability to “get [things] done one way or another.”

“Who do you think cares about people like you? How do they compare in terms of caring about people like you?” the pollster asked.

“That’s where I think Harris wins, I lost a lot of faith in Trump when he didn’t even contact the family of the supporter who died at his rally,” the 37-year-old said.

Pollsters pressed this participant and others across the political spectrum to further explain their stances, and examine the nuance behind choosing a candidate. The researchers saw in real time how voters may sway depending on the issue, and asked follow-up questions about their belief systems.

But the “persuadable” voters weren’t talking to a human pollster. They were conversing with an AI chatbot called Engage.

The speed at which election cycles move, coupled with a steep drop in the number of people participating in traditional phone or door-to-door polls, has caused pollsters to turn to artificial intelligence for insights, both asking the questions and sometimes even answering them.

Why do we poll? 

The history of polling voters in presidential races goes back 200 years, to the 1824 race that ultimately landed John Quincy Adams in the White House. White men began polling each other at events leading up to the election, and newspapers began reporting the results, though they didn’t frame them as predictive of the election’s outcome.

In modern times, polling for public opinion has become a business. Research centers, academic institutions and news conglomerates themselves have been conducting polls during election season for decades. Though their accuracy has limitations, the practice is one of the only ways to gauge how Americans may be thinking before they vote.

Polling plays a different role for different groups, said Rachel Cobb, an assistant professor of political science and legal studies at Suffolk University. For campaign workers, polling groups of voters helps provide insight into the issues people care about the most right now, and informs how candidates talk about those issues. It’s why questions at a presidential debate usually aren’t a surprise to candidates — moderators tend to ask questions about the highest-polling topics that week.

For news outlets, polls help give context to current events and give anchors numbers to illustrate a story. Constant polling also helps keep a 24-hour news cycle going.

And for regular Americans, poll results help them gauge where the race is, and either activate or calm their nerves, depending on if their candidate is polling favorably.

But Cobb said she, like many of her political science colleagues, has observed a drop in responses to more traditional styles of polling. It’s much harder and more expensive for pollsters to do their job, because people aren’t answering their phones or their front doors.

“The time invested in getting the appropriate kind of balance of people that you need in order to determine accuracy has gotten greater, and so they’ve had to come up with more creative ways to get them,” Cobb said. “At the same time, our technological capacity has increased.”

How is AI assisting in polling?

The speed of information has increased exponentially with social media and 24-hour news cycles, and polls have had to keep up, too. Though they bring value in showing insights for a certain group of people, their validity is fleeting because of that speed, Cobb said. Results are truly only representative of that moment in time, because one breaking news story could quickly change public opinion.

That means pollsters have to work quickly, or train artificial intelligence to keep up.

Leib Litman, co-CEO and chief research officer of CloudResearch, which created the chatbot tool Engage, said AI has allowed them to collect answers so much faster than before.

“We’re able to interview thousands of people within a matter of a couple hours, and then all of that data that we get, all those conversations, we’re also able to analyze it, and derive the insights very, very quickly,” he said.

Engage was developed about a year ago and can be used in any industry that conducts market research via interviews. But it’s become especially useful in this election cycle as campaigns attempt to learn how Americans are feeling at any given moment. The goal isn’t to replace human responses with AI, but rather to use AI to reach more people, Litman said.
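
CloudResearch hasn’t published Engage’s internals, but the basic pattern Litman describes – a chatbot that asks an opening question and then generates follow-ups from each answer – can be sketched in a few lines. The sketch below is only an illustration of that pattern using the OpenAI Python client; the model name, prompts, and loop structure are assumptions, not Engage’s actual implementation.

```python
# Minimal sketch of an LLM-moderated poll interview loop (illustrative only;
# model name, prompts, and structure are assumptions, not CloudResearch's Engage).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You are a neutral survey interviewer. Ask one short, non-leading follow-up "
    "question that probes the reasoning behind the respondent's last answer."
)

def next_follow_up(transcript: list[dict]) -> str:
    """Generate the interviewer's next question from the conversation so far."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + transcript,
    )
    return response.choices[0].message.content

def interview(opening_question: str, get_answer) -> list[dict]:
    """Run a short adaptive interview; get_answer supplies the respondent's replies."""
    transcript = [{"role": "assistant", "content": opening_question}]
    for _ in range(3):  # a few adaptive follow-ups
        transcript.append({"role": "user", "content": get_answer(transcript[-1]["content"])})
        transcript.append({"role": "assistant", "content": next_follow_up(transcript)})
    return transcript
```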

But some polling companies are skipping interviews and instead relying on something called “sentiment analysis AI” to analyze publicly available data and opinions. Think tank Heartland Forward recently worked with AI-powered polling group Aaru to determine the public perception of artificial intelligence.

Aaru, the AI prediction company, uses geographic and demographic data for an area and scrapes publicly available information, like tweets or voting records, to simulate poll respondents. The algorithm uses all this information to make assertions about how a certain demographic group may vote or how they may answer questions about political issues.
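
Aaru hasn’t published its method, but the general “synthetic respondent” idea described above – conditioning a language model on a demographic profile built from public data and asking it to answer a poll question in character – can be illustrated roughly as follows. Every detail here (the model, the profile fields, the prompt wording) is an assumption made for illustration, not Aaru’s algorithm.

```python
# Rough illustration of a "simulated respondent": all details are assumptions,
# not Aaru's actual method. Requires the OpenAI Python client and an API key.
from openai import OpenAI

client = OpenAI()

def simulate_answer(profile: dict, question: str) -> str:
    """Ask the model to answer a poll question as a persona built from public data."""
    persona = (
        f"You are a {profile['age']}-year-old {profile['party']} voter from "
        f"{profile['place']} who recently posted online: \"{profile['recent_post']}\". "
        "Answer the survey question in one sentence, in character."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage with an invented profile:
# simulate_answer(
#     {"age": 37, "party": "Republican", "place": "Clark County, NV",
#      "recent_post": "Gas prices are out of control"},
#     "How concerned are you about artificial intelligence?",
# )
```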

This type of poll was a first for Heartland Forward, and its executive vice president Angie Cooper said they paired the AI-conducted poll with in-person gatherings where they conducted more traditional polls.

“When we commissioned the poll, we didn’t know what the results were going to yield,” she said. “What we heard in person closely mirrored the poll results.”

Sentiment analysis

The Aaru poll is an example of sentiment analysis AI, which uses machine learning and large language models to analyze the meaning and tone behind text. That means training an algorithm not just to understand the literal content of a body of text, but also to pick up hidden messaging or context, the way humans do in conversation.
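
For a concrete, if simplified, picture of what sentiment analysis looks like in code, here is a minimal sketch using the open-source Hugging Face transformers library. It is a generic illustration of the technique, not the pipeline Aaru or any particular pollster uses, and the example posts are invented.

```python
# Minimal sentiment-analysis sketch (generic illustration, not any pollster's pipeline).
# Requires: pip install transformers torch
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # loads a default pretrained model

posts = [  # invented examples
    "Thoughts and prayers to the former president and his family tonight.",
    "Early voting lines in my county were three hours long. Unacceptable.",
]

for post, result in zip(posts, classifier(posts)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.99}
    print(f"{result['label']:>8}  {result['score']:.2f}  {post}")
```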

The general public started interacting with this type of AI in about 2010, said Zohaib Ahmed, founder of Resemble AI, which specializes in voice generation AI. Sentiment analysis AI is the foundation behind search engines that can read a request and make recommendations, and behind getting your Alexa device to fulfill a command.

Between 2010 and 2020, though, the amount of information collected on the internet increased exponentially. There’s now far more data for AI models to process and learn from, and technologists have taught these models to process contextual, “between-the-lines” information.

The concept behind sentiment analysis is already well understood by pollsters, says Bruce Schneier, a security technologist and lecturer at Harvard University’s Kennedy School. In June, Schneier and other researchers published a look into how AI was playing a role in political polling. 

Most people think polling is just asking people questions and recording their answers, Schneier said, but there’s a lot of “math” between the questions people answer and the poll results.

“All of the work in polling is turning the answers that humans give into usable data,” Schneier said.

You have to account for a few things: people lie to pollsters, certain groups may have been left out of a poll, and response rates are overall low. You’re also applying polling statistics to the answers to come up with consumable data. All of this is work that humans have had to do themselves before technology and computing helped speed up the process.
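
A toy example of the “math” Schneier is describing is post-stratification weighting, in which each answer is up- or down-weighted so the sample’s demographic mix matches the population’s before a topline number is reported. The groups and numbers below are invented purely for illustration.

```python
# Toy post-stratification example: reweight a lopsided sample so its age mix
# matches the population before computing a topline. All numbers are invented.

# Share of each age group in the population (e.g., from census data).
population_share = {"18-44": 0.45, "45+": 0.55}

# Raw responses: (age_group, supports_candidate_A)
responses = [
    ("18-44", True), ("18-44", True), ("18-44", False),  # younger voters oversampled
    ("45+", False), ("45+", True),
]

# Share of the raw sample falling in each group.
sample_share = {g: sum(r[0] == g for r in responses) / len(responses)
                for g in population_share}

# Each respondent's weight = population share / sample share of their group.
weights = [population_share[g] / sample_share[g] for g, _ in responses]

# Weighted topline: share supporting candidate A after reweighting.
support = sum(w for (_, yes), w in zip(responses, weights) if yes) / sum(weights)
print(f"Unweighted support: {sum(y for _, y in responses) / len(responses):.0%}")
print(f"Weighted support:   {support:.0%}")
```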

In the Harvard research, Schneier and the other authors say they believe AI will get better at anticipating human responses, and knowing when it needs human intervention for more accurate context. Currently, they said, humans are our primary respondents to polls, and computers fill in the gaps. In the future, though, we’ll likely see AI filling out surveys and humans filling in the gaps.

“I think AI should be another tool in the pollster’s mathematical toolbox, which has been getting more complex for the past several decades,” Schneier said.

Pros and cons of AI-assisted polling 

AI polling methods give pollsters more access and more opportunities to gauge public reaction. Those who have begun using AI in their methodology said they’ve struggled to get responses from humans organically, or that they don’t have the time and resources to conduct in-person or telephone polling.

Being interviewed by an anonymous chatbot may also provide more transparent answers for controversial political topics. Litman said personal, private issues such as health care or abortion access are where their chatbot “really shines.” Women, in particular, have reported that they feel more comfortable sharing their true feelings about these topics when talking to a chatbot, he said.

But, like all methodology around polling, it’s possible to build flaws into AI-assisted polling.

The Harvard researchers ran their own experiment asking ChatGPT 3.5 questions about the political climate, and found shortcomings when they asked about U.S. intervention in the Ukraine war. Because the AI model only had access to data through 2021, its answers missed all of the current context about Russia’s invasion, which began in 2022.

Sentiment analysis AI may also struggle with text that’s ambiguous, and it can’t be counted on to interpret developing events, Ahmed said. For example, the X timeline following one of the two assassination attempts on Trump probably included favorable or supportive messages from politicians across the aisle. An AI algorithm might read the situation and conclude that all of those people are very pro-Trump.

“But it doesn’t necessarily mean they’re navigating towards Donald Trump,” Ahmed said. “It just means, you know, there’s sympathy towards an event that’s happened, right? But that event is completely missed by the AI. It has no context of that event occurring, per se.”

Just like phone-call polling, AI-assisted polling can also potentially leave whole groups of people out of surveys, Cobb said. Those who aren’t comfortable using a chatbot, or who aren’t very active online, will be excluded from public opinion polls if pollsters move most of their methods online.

“It’s very nuanced,” Ahmed said of AI polling. “I think it can give you a pretty decent, high-level look at what’s happening, and I guarantee that it’s being used by election teams to understand their position in the race, but we have to remember we exist in bubbles, and it can be misleading.”

Both the political and technology experts agreed that as with most other facets of our lives, AI has found its way into polling and we likely won’t look back. Technologists should aim to further train AI models to understand human sentiment, they say, and pollsters should continue to pair it with human responses for a fuller scope of public opinion.

“Science of polling is huge and complicated,” Schneier said. “And adding AI to the mix is another tiny step down a pathway we’ve been walking for a long time using, you know, fancy math combined with human data.”
