Broad bipartisan support for addressing misinformation on social media

There is widespread bipartisan support among self-identified liberals and conservatives for social media companies adding warnings to posts that contain misleading information or that could lead to the spread of misinformation, according to a new study by Northeastern researchers in the College of Arts, Media and Design.

Much of the polling on content labeling to date has centered on the U.S. presidential election. However, the results of the national survey, released on Wednesday, could raise new concerns about misinformation during the COVID-19 pandemic, particularly around vaccines and other health protocols, says John Wihbey, Associate Professor of Journalism and Media Innovation at Northeastern and co-author of the study.

“We are in a new moment, in a new phase of the pandemic – a moment when we get a little purer sense of what the public thinks about these issues,” says Wihbey.

In recent years, social media companies like Twitter and Facebook have started flagging millions of posts as misinformation, including some from former President Donald Trump, who was suspended from the platforms after the January 6 attack on the U.S. Capitol building carried out by his followers.

Trump’s debunked allegations of widespread electoral fraud in last year’s presidential election, and the uprising that followed, sparked a heated debate over the responsibility of tech companies to police the kind of information users may share, including restricting or removing so-called fake news, hate speech, and other content deemed problematic.

Over the summer, the team of Northeastern researchers surveyed more than 1,400 people in the United States using an academic survey platform called Prolific. Half of the respondents said they used Twitter occasionally or more often, and 68% said they used Facebook occasionally or more often.

The survey was conducted and released jointly with Northeastern’s Ethics Institute as part of a broader effort to investigate potential new approaches to labeling content on social media platforms. The study’s co-authors include Garrett Morrow, a political science graduate student; Myojung Chung, Assistant Professor of Journalism and Media Advocacy; and Mike Peacey, Associate Professor of Economics.

John Wihbey, Associate Professor of Journalism and Media Innovation, poses for a portrait. Photo by Matthew Modoono / Northeastern University

The study found that 92.1% of liberals, 60.1% of conservatives, and 78.4% of moderates “strongly or somewhat agree” that social media platforms should use labels to inform users about posts that contain misleading information. Such labels have been used both to flag misinformation, as with Twitter’s “fact check” labels, and to warn users of potentially graphic or harmful content, as with sensitive media warnings.

Participants also said that they often encountered “problematic content” – misleading or false information and hate speech – while using the social platforms. The researchers did not try to define misinformation or problematic content in the study, says Wihbey, but rather relied on participants’ perceptions of such problems when answering the survey questions.

The researchers also note that participants exhibited high levels of “overconfidence bias”: they trusted their own ability to spot misleading statements and misinformation online, but expressed distrust in others’ ability to do the same.

The significant bipartisan agreement on labeling came as a bit of a surprise, says Wihbey, considering how polarizing the topic of content moderation was in the days after the election. Many conservatives opposed the Trump ban, and bans generally, saying censorship was on the way.

But the study also confirmed some of the partisan disagreement over the best approach to content moderation: 63.2% of conservatives said that flagging Trump’s posts, rather than banning him, was enough to deal with his “harmful messages.” By comparison, more than 80% of liberals thought tougher action was needed.

The study comes as governments attempt to regulate the moderation policies of the tech giants. Just this week, Texas Governor Greg Abbott, a Republican, signed a bill that requires social media companies to disclose their content moderation policies and create an appeals process for banned users. Under the new law, users can sue companies to have their accounts restored. Florida approved a similar law earlier this year.

Democrats have also sought to influence company policies. Over the summer, President Joe Biden pushed Facebook to move faster against posts spreading COVID-19 misinformation, saying that the false information circulating on the platform about the safety and effectiveness of the vaccines was “killing people.”

“There is an enormous need to figure out what tools and methods we need to combat disinformation and misinformation,” says Wihbey, summarizing the mood of the survey. “At the same time, I think people don’t believe that shutting down accounts and disabling share buttons is the [only] way to go.”

Wihbey says the study suggests the public may have struck some sort of middle ground.

“We find that people want labels to link them to credible sources for verification, prime them to think critically about misinformation, and slow the spread of misinformation by warning people about content they may be about to share,” the authors explain.

For media inquiries, please contact media@northeastern.edu.
