Are extremism and “fake news” on social media to blame? – Orange County Register
Congressmen will be grilling big tech CEOs tomorrow at a hearing titled “Disinformation Nation: The Role Of Social Media In Promoting Extremism and Disinformation.”
This is no surprise.
The last two federal elections, the siege of the U.S. Capitol, and the spread of conspiracy theories surrounding COVID-19 have certainly shown how social media users spread misinformation and propaganda on the platforms. But is the problem really just big tech’s fault?
This question should be kept in mind as lawmakers prepare to use this hearing as a stage for reforms that will inevitably force social media platforms to censor innocent users and restrict free online discourse.
Social media platforms have become dynamic information carriers. It’s easy to disseminate information quickly and bring users to an unprecedented variety of news sources, all consolidated in a single place, enabling real-time interaction with a broad community of co-users, including friends, family and acquaintances.
These features have brought tremendous benefits to both users and media companies, with the latter relying on them to stay afloat amid the decline of traditional print media. At the same time, social media can help spread misinformation, aided by the false legitimacy of endorsement by otherwise trustworthy friends, colleagues, and family members who share, like, or comment on posts.
And while the platforms have enabled the formation and growth of user communities that congregate behind common interests and opinions, they also hold the potential for nefarious elements to spread messages of hate, conspiracy theories, or extremist ideologies.
To combat these problems, large companies like Facebook have deployed content moderation teams, independent “fact checkers,” and advanced algorithms to catch fake news. However, getting rid of the problem completely is a pipe dream.
It is simply not possible to check in real time every piece of information or every point of view that billions of users post. Even if an algorithm made this possible, it would also likely capture innocuous speech, as well as controversial opinions that remain important to public discourse, given how rapidly understanding of various topics can evolve with the benefit of new or previously disregarded information. Excessive moderation is also likely to undermine the basic functionality of the platforms by preventing real-time discussions.
Look at COVID-19. Even trusted expert agencies like the Centers for Disease Control and Prevention have been forced to change or withdraw their own recommendations on social distancing and masking based on new information and experience. Holding social media companies accountable for allowing users to question the official positions of these experts could prevent conspiracy theorists from making unfounded, fear-driven claims that could undermine public health. But it could also prevent legitimate questions from being raised that are confirmed with hindsight.
Oddly enough, even “fact checking” on social media can hurt those who are not making factual claims or spreading misinformation at all, such as artists and satirists, as existing misinformation filters have shown. Similarly, attempts to use algorithms to crack down on extremist content have inadvertently captured and censored journalists sharing videos in an effort to expose human rights abuses around the world.
In reality, “fake news” and sensational propaganda predate social media, and the inherent psychological tendencies that make them attractive and help them spread cannot be dampened by overregulating the platforms. During the Civil War, the New York Herald was accused of deliberately inflaming tensions between the North and the South to sell more newspapers by falsely claiming that George Washington’s body had been removed from his grave at Mount Vernon, Virginia.
The tendency of social media engagement to reward and highlight information that confirms a person’s pre-existing views, rather than necessarily rewarding accurate or unbiased information, is a consequence of human nature, not of the websites or apps themselves.
The difference between the print era and the era of digital platforms is that users are not only exposed to partisan news that caters to their biases. If they want, they can easily find conflicting sources and viewpoints aggregated in one place, whether on a news-only platform like Google News or on social media sites like Facebook.
When Google News pulled out of Spain in 2014, the overlap between audiences across the various channels decreased significantly – the kind of fragmentation that prevents narratives and prejudices from being challenged.
And just as social media could do more good than harm by exposing people to sources they would otherwise not seek out, security experts also credit public social media with helping to monitor and track extremism. This becomes more difficult when extremists are pushed from public platforms to encrypted messaging applications or the dark web. These narrow spaces act as echo chambers for extremism, reinforcing talking points and entrenching ideologies.
Public resentment about the perceived censorship of conservative viewpoints is likely to intensify as the largely liberal crusade for more censorship to combat “fake news” gains momentum.
Sure, companies could tweak their algorithms so that highlighted comments and stories aren’t primarily ones that appeal to users’ biases or emotions. But even this is not a guaranteed solution and can have unintended consequences.
A better idea is simply to educate future generations of users to be skeptical and think critically, without expecting Big Brother or tech giants to be perfect arbiters of falsehood. As Abraham Lincoln famously said, “Don’t believe everything you read on the Internet.”
Satya Marar is a Senior Contributor and Tech Policy Fellow at Young Voices. His writings on technology and innovation have been featured in the Washington Examiner, Washington Times, The Hill, and RealClearPolicy.