Researchers have found an easy way to reduce hate speech online

Online hate speech has been a problem since the dawn of the internet, but it has metastasized in recent years. For the past five years, social media has fueled right-wing extremism, and more recently it helped lure Trump supporters, steeped in misinformation largely circulated online, into attempting a violent coup after his 2020 defeat.

The January 6 riot prompted Twitter and other social media platforms to lock Trump’s accounts, as the companies faced questions about how they intended to reconcile protecting people from harm and propaganda with maintaining a culture that encourages free expression.

Given the limitations of digital platforms, it is appropriate to be skeptical of such efforts. After all, social media companies profit from our communication regardless of its content, and they occupy an odd position in that they generally don’t produce their own media but collect and curate the words of others.

However, a new study by New York University researchers finds that a relatively simple step on a platform’s part could have a major effect on the prevalence and spread of hate speech. The study involved sending warnings on Twitter to users who had posted tweets containing hate speech.

The study, published in the journal Perspectives on Politics, examined whether warning users that they risk being held accountable could reduce the spread of hate speech. The researchers based their definition of “hateful language” on a dictionary of racial and sexual slurs. After identifying 4,300 Twitter users who followed accounts that had been banned for posting hateful language, the researchers sent warning tweets from their own accounts conveying (in slightly varying forms) that “the user [@account] you follow has been banned, and I suspect that this was because of hateful language.” A separate control group received no messages at all.

The study was conducted during one week in July 2020, amid the Black Lives Matter protests and the COVID-19 pandemic, a period when there was a significant amount of hate speech against Black and Asian Americans on the internet.

The result? Twitter users who received a warning cut the number of hateful tweets they posted by up to 10 percent over the following week; when the warning messages were worded politely, they did so by up to 15 or 20 percent. The study found that people were more likely to curb their hateful speech when the account sending the warning conveyed a sense of authority. Since there was no significant decrease within the control group, this suggests that people change their bad behavior when told they could be held accountable, and that they are more likely to see a warning as legitimate if it comes from someone credible and polite.

The researchers added that these numbers are likely underestimates. The accounts they used had at least 100 followers, which gave them only limited credibility. Future experiments could test how the effect changes if the warnings come from accounts with more followers, or from Twitter employees.

“We also suspect that these are conservative estimates, in the sense that an increase in the number of followers on our account could lead to even higher effects,” the authors write, citing other studies.

Unfortunately, a month after the warnings were sent, they had lost their effect. The tweeters were soon posting hate speech again at rates similar to those before the experiment began.

“Part of the motivation for this paper, and what led to the development of the research design, was trying to think about whether there are options other than simply banning people or throwing them off platforms,” New York University professor Joshua A. Tucker, a co-author of the paper, told Salon. “There are many concerns that if you throw people off platforms for a period of time, they will go elsewhere, where they could keep using hateful language and other such content, or that they could come back and be even angrier about it. In a way, this was motivated by the idea of thinking about the range of options there are for reducing hate on these platforms overall.”

Although social media sites function as if they were free spaces for expression, the curation of content on private platforms is not prohibited by the First Amendment. The Constitution only forbids the government from punishing people for their speech; private companies have the right to enforce speech codes for both their employees and their customers.
