How AI helps us fact-check misinformation on the air

Artificial intelligence is a fraught topic for journalists — just ask the guy who <ahem> “wrote” this year’s summer reading list for the Chicago Sun-Times.
But for all its risks, AI also presents opportunities we are just now starting to understand. For example, Wisconsin Watch has been an early user of and partner on an AI-powered tool built by Gigafact that can help analyze the thousands of hours of podcasts, social media videos and talk radio programs that could be spreading misinformation every day.
The tool, known as Parser, can process an hourlong audio file in a matter of minutes and not only provide a transcript, but also identify specific claims made during the audio segment and even the person making the claim.
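For readers curious about what that kind of pipeline involves, here is a minimal sketch of the general approach: transcribe the audio, then flag sentences that look like checkable claims. To be clear, this is not Gigafact's code. It assumes the open-source Whisper model for transcription and uses a toy keyword heuristic where Parser presumably applies far more sophisticated claim detection and speaker identification.

```python
import re

import whisper  # open-source speech-to-text: pip install openai-whisper

# Crude stand-in for claim detection: flag segments containing numbers,
# percentages or superlatives, which often signal checkable factual claims.
CLAIM_CUES = re.compile(
    r"\b(\d[\d,.]*%?|million|billion|percent|highest|lowest|never|always)\b",
    re.IGNORECASE,
)

def transcribe(path: str) -> list[dict]:
    """Return Whisper's timestamped transcript segments for an audio file."""
    model = whisper.load_model("base")
    result = model.transcribe(path)
    return result["segments"]  # each segment has "start", "end" and "text"

def flag_candidate_claims(segments: list[dict]) -> list[dict]:
    """Keep only segments that match the keyword heuristic above."""
    return [seg for seg in segments if CLAIM_CUES.search(seg["text"])]

if __name__ == "__main__":
    # "talk_radio_hour.mp3" is a placeholder file name for illustration.
    for seg in flag_candidate_claims(transcribe("talk_radio_hour.mp3")):
        print(f"[{seg['start']:7.1f}s] {seg['text'].strip()}")
```

Even a rough filter like this hints at why the approach saves time: a reporter reviews a short list of timestamped candidate claims instead of listening to the full hour.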

Wisconsin Watch fact briefs reporter Tom Kertscher has been using Parser to make it easier to find surprising and dubious claims. Before Parser, he would listen to those hourlong podcasts and radio shows himself, trying to pick up on what Wisconsin politicians were saying. In tracking how much time it took to produce a fact brief, we found that in some cases almost half the time was spent just searching for a claim.
Parser has sped up that process, making it possible to scan through far more audio recordings of interviews.
“We can cover so much more ground with Parser, checking many more politicians and interviews than we could manually,” Kertscher said.
Gigafact began developing Parser after Wisconsin Watch provided that feedback on how much time it can take to stay on top of every claim that every politician makes. But the problem of misinformation is far bigger than just keeping tabs on politicians.

Last year the investigative journalism class at UW-Madison worked on a project about talk radio in Wisconsin. One of the key findings was the notable amount of misinformation being spread on the airwaves, especially by conservative pundits.
To do that project, students spent a significant amount of time listening to six radio hosts whose viewpoints spanned the political spectrum. They took four hours of programming from each host from the week after the Super Bowl — 24 hours of audio in total — and manually processed the audio into a database of claims. Even with a transcription tool, the process easily took more than 100 hours to produce a list of claims to fact-check.
Earlier this year, I worked with Gigafact, using Parser to process 24 hours of audio from the same hosts during the week after this year’s Super Bowl. We came up with a list of claims in two hours.
Wisconsin Watch and Gigafact presented that case study in using AI at a recent Journalism Educators Institute conference hosted by the University of Wisconsin-Madison’s School of Journalism and Mass Communication. We’ll present it again this week at the Investigative Reporters and Editors conference in New Orleans.
And if you haven’t read it yet, add our investigative journalism project Change is on the Air to your summer reading list. Unfortunately for the students who devoted so many hours to listening and re-listening to those talk radio hosts, it was not produced using AI. But maybe next time.
