
Study: Crowds of ordinary readers can wise up to fake news


Amid widespread concern over misinformation, social media networks and news organizations often employ fact-checkers to sort the true from the false. But fact-checkers can only evaluate a small fraction of the stories circulating on the Internet.

A new study by MIT researchers suggests an alternative approach: crowdsourced accuracy assessments from groups of ordinary readers can be virtually as effective as the work of professional fact-checkers.

“One problem with fact-checking is that there is just too much content for professional fact-checkers to be able to cover in a reasonable time,” says Jennifer Allen, a graduate student at the MIT Sloan School of Management and co-author of a newly published paper detailing the study.

But the current study, which examined over 200 news stories that Facebook’s algorithms had flagged for further scrutiny, may have found a way to address this problem: using relatively small, politically balanced groups of lay readers to evaluate the headlines and lead sentences of news stories.

“We found it encouraging,” says Allen. “The average rating of a crowd of 10 to 15 people correlated as well with the fact-checkers’ judgments as the fact-checkers correlated with each other. This helps with the scalability problem, because these raters were ordinary people with no fact-checking training, and they were just reading the headlines and lead sentences without taking the time to do any research.”

This means the crowdsourcing method could be deployed widely and inexpensively. The study estimates that the cost of having readers rate news this way is about $0.90 per story.

“There is no single thing that solves the problem of false news online,” says David Rand, a professor at MIT Sloan and senior co-author of the study. “But we are working to add promising approaches to the anti-misinformation toolkit.”

The paper, “Scaling up Fact-Checking Using the Wisdom of Crowds,” is published in Science Advances. The co-authors are Allen; Antonio A. Arechar, a researcher at the MIT Human Cooperation Lab; Gordon Pennycook, an assistant professor of behavioral science at the Hill/Levene Schools of Business at the University of Regina; and Rand, the Erwin H. Schell Professor and a professor of management science and of brain and cognitive sciences at MIT, and director of MIT’s Applied Cooperation Lab.

A critical mass of readers

To conduct the study, the researchers used 207 news articles that an internal Facebook algorithm had identified as needing review, either because there was reason to believe they were problematic or simply because they were being widely shared or addressed important topics such as health. The experiment enrolled 1,128 U.S. residents through Amazon’s Mechanical Turk platform.

These participants were shown the headline and lead sentence of 20 news stories and asked seven questions – how much each story was “accurate,” “true,” “reliable,” “trustworthy,” “objective,” “unbiased,” and “describ[ing] an event that actually happened” – which were combined into an overall accuracy score for each story.
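As a rough illustration of that aggregation step, the sketch below (in Python) averages one participant’s answers to the seven questions into a single accuracy score for one story. The 1-to-7 response scale, the simple mean, and the function name are assumptions made for illustration, not details taken from the paper.

```python
# Hedged sketch: combine one participant's answers to the seven accuracy
# questions into a single score for one story. The 1-7 response scale and
# the plain mean are illustrative assumptions, not the paper's exact recipe.

QUESTIONS = [
    "accurate", "true", "reliable", "trustworthy",
    "objective", "unbiased", "describes an event that actually happened",
]

def accuracy_score(responses: dict[str, int]) -> float:
    """Average the seven ratings (assumed 1-7 scale) into one overall score."""
    if set(responses) != set(QUESTIONS):
        raise ValueError("expected exactly one response per question")
    return sum(responses.values()) / len(responses)

# Example: one participant's ratings for a single headline and lead sentence.
example = dict(zip(QUESTIONS, [6, 6, 5, 5, 4, 4, 6]))
print(round(accuracy_score(example), 2))  # 5.14
```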

At the same time, three professional fact-checkers were given all 207 stories and asked to evaluate them after researching them. Consistent with other studies of fact-checking, the fact-checkers’ ratings were highly correlated with one another, but their agreement was far from perfect. In about 49 percent of cases, all three fact-checkers agreed on the proper verdict about a story’s veracity; in about 42 percent of cases, two of the three agreed; and in about 9 percent of cases, the three fact-checkers each gave different ratings.

Intriguingly, when the regular readers recruited for the study were sorted into groups with equal numbers of Democrats and Republicans, their average ratings were highly correlated with the professional fact-checkers’ ratings – and once at least a double-digit number of readers was involved, the crowd’s ratings correlated as strongly with the fact-checkers as the fact-checkers did with each other.
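To make that comparison concrete, here is a minimal sketch of the balanced-crowd step: sample equal numbers of Democratic and Republican raters, average their scores for each story, and compute a Pearson correlation against the fact-checkers’ mean ratings. The function names, the data layout, and the use of a plain Pearson correlation are illustrative assumptions; the paper’s actual analysis is more involved.

```python
# Hedged sketch: form a politically balanced crowd, average its ratings per
# story, and correlate the result with the mean fact-checker rating.
# Data shapes and the plain Pearson correlation are illustrative assumptions.
import random
from statistics import mean, correlation  # statistics.correlation needs Python 3.10+

def balanced_crowd(dem_ids, rep_ids, size_per_party, seed=0):
    """Sample an equal number of Democratic and Republican raters."""
    rng = random.Random(seed)
    return rng.sample(dem_ids, size_per_party) + rng.sample(rep_ids, size_per_party)

def crowd_vs_checkers(crowd_ratings, checker_ratings, crowd_ids):
    """Correlate crowd-average accuracy scores with mean fact-checker ratings.

    crowd_ratings:   {story_id: {rater_id: score}}   (assumed layout)
    checker_ratings: {story_id: [score_1, score_2, score_3]}
    Assumes at least one sampled rater scored every story.
    """
    stories = sorted(checker_ratings)
    crowd_means = [
        mean(crowd_ratings[s][r] for r in crowd_ids if r in crowd_ratings[s])
        for s in stories
    ]
    checker_means = [mean(checker_ratings[s]) for s in stories]
    return correlation(crowd_means, checker_means)  # Pearson correlation
```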

“These readers weren’t trained in fact-checking, and they were only reading the headlines and lead sentences, and even so they were able to match the performance of the fact-checkers,” says Allen.

While it may seem surprising at first that a crowd of 12 to 20 readers can match the performance of professional fact-checkers, this is another example of a classic phenomenon: the wisdom of crowds. Across a wide range of applications, groups of laypeople have been found to match or exceed the performance of expert judgments. The current study shows this can hold even in the highly polarizing context of identifying misinformation.

Participants in the experiment also took a political knowledge test and a test of their tendency to think analytically. Overall, the ratings of people who were better informed about civic issues and who engaged in more analytical thinking were more closely aligned with those of the fact-checkers.

“People who engaged in more reasoning and who were more knowledgeable agreed more with the fact-checkers,” says Rand. “And that was true regardless of whether they were Democrats or Republicans.”

Participation mechanisms

The researchers say the finding could be applied in many ways, and note that some social media giants are already actively attempting to make crowdsourcing work. Facebook has a program called Community Review, which hires laypeople to evaluate news content; Twitter has its own project, Birdwatch, which solicits input from readers about the accuracy of tweets. The wisdom of crowds can be used either to apply publicly visible labels to content, or to inform ranking algorithms that determine which content people are shown in the first place.

Of course, the authors note, any organization using crowdsourcing needs to find a good mechanism for reader participation. If participation is open to everyone, the crowdsourcing process could be unfairly influenced by partisans.

“We haven’t yet tested this in an environment where anyone can participate,” notes Allen. “Platforms shouldn’t necessarily expect that other crowdsourcing strategies will produce equally positive results.”

On the flip side, Rand says, news and social media organizations would need to find ways to get a large enough group of people actively evaluating news items in order for crowdsourcing to work.

“Most people don’t care about politics and care even less about trying to influence things,” says Rand. “But the concern is that if you let people rate any content they want, then the only people doing it will be the ones trying to game the system. Still, to me, a bigger concern than being swamped by zealots is the problem that no one would do it. It’s a classic public-goods problem: society as a whole benefits when people identify misinformation, but why should users bother to invest the time and effort to provide ratings?”

The study was supported, in part, by the William and Flora Hewlett Foundation, the John Templeton Foundation, and the Reset project of the Omidyar Group’s Luminate Project Limited. Allen is a former Facebook employee who still holds a financial interest in Facebook; other research by Rand is supported, in part, by Google.

– This news release was originally posted on the Massachusetts Institute of Technology website
