
Social Media and Online Speech: How Should Countries Regulate Tech Giants?

The role of social media and online speech in civil society is coming under increasing scrutiny. The deadly January 6 riot at the U.S. Capitol is just one example of violence that, according to national security experts, was stoked largely on social media platforms. In other parts of the world, social media has contributed to religious and ethnic violence, including against Muslims in India and the Rohingya in Myanmar. Harmful misinformation, including about the COVID-19 pandemic, has also spread easily and quickly.

Platforms like Facebook and Twitter have become de facto public places in many countries, and governments are taking different approaches to regulate them.

How do the major platforms regulate content?

The most popular platforms, most of which are operated by U.S. companies, have similar content moderation guidelines. They block posts that glorify or encourage violence; posts that are sexually explicit; and posts that contain hate speech, which they define as, among other things, attacking a person because of their race, gender, or sexual orientation. The major platforms have also taken steps to limit disinformation, including fact-checking posts, flagging state media accounts, and banning political advertising.

These platforms generally adhere to the laws of the countries in which they operate, which can restrict speech even further. Facebook, Twitter, and YouTube (owned by Google) not only use artificial intelligence-based moderation software but also employ thousands of people to review posts for violations.

What are the controversies?

Critics say these platforms don't enforce their rules consistently. Both Twitter and Facebook, for example, have allowed accounts they deem to be in the public interest, especially those of politicians such as former U.S. President Donald J. Trump, to post abusive or misleading content that might have been removed had it been posted by an ordinary user.

15,000: The number of moderators Facebook uses to review content on its services. (Source: NYU Stern Center for Business and Human Rights)

In Trump's case, the companies instead added fact-checks to some of his posts, a response that some social media and misinformation experts criticized as inadequate. Both platforms eventually banned Trump after the riot at the U.S. Capitol, but both have been criticized for failing to take similar measures overseas. YouTube has also come under fire for allegedly being more lenient toward star users who generate more revenue, and for failing to quickly remove videos making false claims about U.S. election fraud and other misinformation.

Critics say companies have little incentive to regulate hateful or violent speech because their ad-driven business models depend on maximizing user engagement. At the same time, politicians in some countries, including the United States, argue that social media companies have gone too far in moderating content, at the expense of free speech.

For their part, social media companies argue that their policies are difficult to enforce. It can be hard, for example, to distinguish hate speech from satire or commentary. Some companies say the responsibility for writing rules for the internet shouldn't lie with them and have called for government regulation.

How are governments around the world approaching the issue?

In the United States, social media platforms have largely been left to develop and enforce their own policies, though Washington is weighing new laws and regulations. Other countries have introduced or proposed laws to force social media companies to do more to police online discourse. Authoritarian governments generally have more restrictive censorship regimes, but even some Western democracies, such as Australia and Germany, have taken tougher approaches to regulating online speech.
