AI fact‑checking works, but mostly for progressives

As social media platforms increasingly lean on artificial intelligence to spot misinformation, new research suggests those tools don’t work equally well for everyone.

Jason Thatcher

In two large online experiments conducted in the U.S. and U.K. during the 2020 and 2022 news cycles, researchers found that AI fact‑checkers were generally more effective than human fact‑checkers at making people less likely to believe false news, but mainly among progressive users. Conservatives reacted about the same to AI and human fact‑checking, often putting more weight on the reputation of the news source itself.

The study shows that people’s politics strongly shape whether fact‑checks, human or AI, actually change minds, said Jason Thatcher, professor of information systems at the Leeds School of Business and co-author of the paper, forthcoming in MIS Quarterly.

“People that are conservative trust humans because they're predictable, they're reliable, they're familiar, whereas perhaps progressives trust the technology,” Thatcher said.

That divide, he added, may help explain why AI fact-checkers persuade some users but not others.

Human vs. AI fact-checkers

The researchers, who included co-authors from Northeastern University’s D’Amore‑McKim School of Business and Temple University’s Fox School of Business, set out to understand not just whether AI or human fact‑checkers worked better but how people judged the source of a fact‑check in the first place.

“We weren't interested in which was more effective,” Thatcher said. “We were interested in how people evaluated who did the rating.”

To do that, the researchers conducted two online experiments involving 370 active social media users in the United States and the United Kingdom, designed to reflect how people actually come across news on social media. Participants were shown news posts designed to look like real social media content, similar to what someone might see on Facebook or Reddit.

The posts covered polarizing, widely discussed issues where misinformation often spreads, including climate change, vaccines, immigration and taxes. Some of the news stories were false, and some were accurate, reflecting the mixed information people see online.

The researchers then varied a few key details: whether a post was fact‑checked by an AI system, a human fact‑checker or not at all, and whether the post appeared to come from a high‑ or low‑reputation source. Participants also reported whether they identified as progressive or conservative, allowing the researchers to compare how different groups responded to the same information.

After viewing each post, participants were asked how believable it seemed and whether they would talk about it, comment on it or share it. The researchers ran the same basic experiment in both countries, drawing on news content from the 2020 and 2022 news cycles, to test whether the results held up beyond a single country or political moment.

Fact‑checking isn’t just about facts

Overall, the study found that AI fact‑checkers were more effective than human ones at making people less likely to believe false news, but again, primarily among progressive users. Conservatives, on the other hand, reacted about the same to AI and human fact‑checks and tended to rely more heavily on the reputation of the news source itself.

The research also found that fact‑checking can be more complicated when false claims come from well‑known or trusted sources, particularly when human fact‑checkers are involved.

Taken together, Thatcher said, the findings point to a basic challenge in fighting misinformation: It’s not just about getting the facts right but about trust.

“One fact‑checking system is probably not going to work for everyone,” he said. “The solution is having more than one way of providing evidence, considering the source of information and helping people reach their own conclusions.”