Google proves that social media can slow down the spread of fake news
During the COVID-19 pandemic, the public has faced a second threat: what UN Secretary-General António Guterres has called a “pandemic of misinformation.” Misleading propaganda and other fake news spread easily on social networks, threatening public health. One in four American adults say they won’t get the vaccine. And while the United States finally has enough doses to reach herd immunity, too many people are worried about the vaccines (or skeptical that COVID-19 is even a dangerous disease) for the country to hit that threshold.
But a new study from the Massachusetts Institute of Technology and Jigsaw, Google’s social technology incubator, offers hope for fixing misinformation on social networks. In a large study of 9,070 American participants, controlled for gender, race, and partisanship, researchers found that a few simple UI interventions can deter people from sharing fake news about COVID-19.
How? Not by “educating” them about the difference between reliable and unreliable sources. And not by having content “flagged” as false by fact-checkers, as Facebook has tried.
Instead, the researchers introduced several different prompts via a simple pop-up window, each with a single goal: to get people thinking about accuracy before sharing. Once prompted to consider a story’s accuracy, people were up to 20% less likely to spread fake news. “It’s not that we’ve come up with an intervention that you give people once and they’re set,” says MIT professor David Rand, the study’s lead author. “Rather, the point is that the platforms by their nature keep people distracted from accuracy.”
An early prototype accuracy prompt asked users to consider the accuracy of a news headline before continuing to browse. [Image: Jigsaw]

At the start of the experiment, participants were shown a pop-up prompt, such as one asking them to rate the accuracy of a neutral headline. One example: “‘Seinfeld’ is officially coming to Netflix.” This was simply to get them thinking about accuracy. They were then presented with higher-stakes content related to COVID-19 and asked whether they would share it. Examples of the COVID-19 headlines people had to assess were “Vitamin C Protects Against Coronavirus” (false) and “CDC: The Coronavirus Can Continue To Spread Through 2021, But The Effects May Be Mitigated” (true). People primed to ponder the accuracy of headlines were less likely to share false COVID-19 content.
“Oftentimes people are pretty good at telling what is true and what is false. And people, by and large, say they don’t want to share inaccurate information,” says Rand. “But they may do it anyway because they’re distracted, because the social media context is drawing their attention to other things [than accuracy].”
An animated version of Jigsaw’s “Digital Literacy Tip” experience: variations of this design were tested for effectiveness along several dimensions. [Image: Jigsaw]

What other things? Baby photos. A friend’s new job announcement. The ubiquitous social pressure of likes, shares, and follower counts. Rand explains that all of these things add up, and the design of social media distracts us from our natural judgment.
“Even if you care about accuracy and are generally a critical thinker, the social media context just turns that part of your brain off,” says Rand, who recalls a moment last year when he discovered he had shared an inaccurate story online, even though he researches this very subject.
MIT pioneered the underlying research. Jigsaw then stepped in to develop and fund the work, using its own designers to create the prompts. Rocky Cole, research program manager at Jigsaw, says the idea is still “incubating” at the company, and he can’t imagine it appearing in Google products until the company is sure the work has no unintended consequences. (Meanwhile, Google subsidiary YouTube remains a dangerous haven for extremist misinformation, fueled by its own recommendation algorithms.)
Through the research, MIT and Jigsaw developed and tested several small interventions designed to restore a person to a thoughtful, critical state of mind. One approach was called “evaluation.” It simply asked someone to do their best to judge whether a sample headline appeared accurate, which primed their critical mindset. If subjects then saw a COVID-19 headline, they were far less likely to pass on misinformation.
Another approach was called “tips.” It was just a small box prompting the user: “Be skeptical of headlines. Investigate the source. Watch for unusual formatting. Check the evidence.” A third approach, called “importance,” simply asked users how important it was to them to share only accurate stories on social media. Both approaches curbed the spread of misinformation by around 10%.
One approach that didn’t work on its own was based on partisan norms: a message explaining how important it was to both Republicans and Democrats to share only accurate information on social media. Interestingly, when that “norms” approach was combined with the “tips” or “importance” approach, both tips and importance became more effective. “The general conclusion is that you can do a lot of different things that raise the concept of accuracy in different ways, and they all work pretty well,” says Rand. “You don’t need one special magical method to do this.”
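To make those percentages concrete, here is a minimal sketch of what a 10% or 20% relative reduction in sharing means in absolute terms. The baseline sharing rate below is a hypothetical number chosen for illustration, not a figure from the MIT/Jigsaw study:

```python
# Hypothetical illustration: converting a relative reduction in sharing
# into an absolute sharing rate. The 40% baseline is an assumption,
# not a result reported by the study.

def shared_rate(baseline: float, relative_reduction: float) -> float:
    """Sharing rate after an intervention that cuts sharing of false
    headlines by `relative_reduction` (e.g. 0.10 means a 10% cut)."""
    return baseline * (1.0 - relative_reduction)

baseline = 0.40  # assume 40% of users would share a given false headline

# "tips" / "importance" prompts: roughly 10% relative reduction
print(round(shared_rate(baseline, 0.10), 2))  # 0.36

# strongest accuracy prompts: up to 20% relative reduction
print(round(shared_rate(baseline, 0.20), 2))  # 0.32
```

The point of the arithmetic: even a modest-sounding relative reduction shaves several percentage points off the absolute share rate, which compounds across millions of users.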
The only problem is that we still don’t understand a key piece of the puzzle: How long do these prompts keep working? When does their effect wear off? Do users start tuning them out?
“I would guess [these effects are] pretty short-lived,” says Cole. “The theory suggests that people care about accuracy . . . but then they see a cute cat video online, and suddenly they’re not thinking about accuracy anymore; they’re thinking about something else.” And the more often accuracy prompts appear, the easier they become to ignore.
These unknowns point the way for future research. For now, we know there are tools that can be easily integrated into social media platforms to curb the spread of misinformation.
To keep people sharing accurate information, sites may need a constant stream of new ways to get users thinking about accuracy. Rand points to a prompt Twitter introduced during the last presidential election. He thinks that prompt is well designed, since it asks readers whether they’d like to read an article before retweeting it, reminding them of the issue of accuracy. But Twitter hasn’t updated the prompt in the many months since, so it’s probably less effective now, he says. “The first time [I saw it], it was like, ‘Whoa! Shit!’” says Rand. “Now it’s like, ‘Yeah, yeah.’”