Why using Facebook should require a media literacy test

We don’t let people start driving until they have completed driver training and passed a test, and for good reason: Vehicles are dangerous for drivers, passengers and pedestrians. Social networks and the misleading and harmful content they distribute are also dangerous to society, so some level of media literacy education – and a test – should be a condition of their use.

Social media companies like Facebook and Twitter would certainly reject such an idea, calling it onerous and extreme. But that stance willfully misunderstands the enormous threat that misinformation poses to democratic societies.

The Capitol insurrection gave us a glimpse of the kind of America that misinformation can produce – and illustrated why it’s so dangerous. On January 6th, the nation witnessed an unprecedented attack on our seat of government in which seven people died and lawmakers feared for their lives. The rioters who wreaked havoc planned their march on the Capitol on social media, including in Facebook groups, and were driven to violence by months of disinformation and conspiracy theories claiming the presidential election had been “stolen” from Donald Trump.

While the major social networks have made significant investments in combating misinformation, it may be impossible to remove all or even most of it. For that reason, the time has come to shift the focus from containing the spread of misinformation to giving people the tools to recognize and reject it.

Media literacy should certainly be taught in schools, but it should also be taught where people actually encounter misinformation – on social networks. Large social networks that distribute news and information should require a short media literacy course, followed by a quiz, before users can log in. And the social networks should be legally obliged to do so.

Moderation is difficult

So far we have relied on the major social networks themselves to protect their users from misinformation. They use AI to locate misleading content and then delete it, flag it, or reduce its circulation. The law even protects social networks from being sued over the content moderation decisions they make.

However, it is not enough to rely on social networks to control misinformation.

First of all, the tech companies that run social networks often have a financial incentive to let misinformation stand. The content delivery algorithms they use favor hyperpartisan and often half-true or untrue content, because it consistently gets the most engagement from users in the form of likes, shares, and comments. That creates ad views. It’s good for business.

Second, large social networks are locked into an endless process of expanding censorship as propagandists and conspiracy theorists find ever more ways to spread false content. Facebook and other companies (like Parler) have learned that a purist approach to freedom of expression – that is, allowing any speech that isn’t illegal under U.S. law – is not practical in digital spaces. Censoring some types of content is responsible, and good. In its most recent concession, Facebook announced on Monday that it would ban all posts promoting debunked theories about vaccines (including those for COVID-19), such as the claim that they cause autism. But it is impossible even for well-meaning censors to keep up with the endless ingenuity of purveyors of disinformation.

There are logistical and technical reasons for this. Facebook relies on some 15,000 (mostly contract) content moderators to police the posts of its 2.7 billion users worldwide. It is also increasingly turning to AI models to find and moderate harmful or false posts, but the company itself admits that these AI models still can’t comprehend some kinds of harmful content, such as that embedded in memes or videos.

For this reason, it may be better to help social media users identify and reject misinformation themselves – and refrain from spreading it.

“I have recommended that the platforms conduct media literacy training directly on their websites,” says disinformation and content moderation researcher Paul Barrett, deputy director of the Stern Center for Business and Human Rights at New York University (NYU). “There is also the question of whether there should be a media literacy button that stares you in the face on the site, so that a user can access media literacy content at any time.”

A quick primer

Social media users young and old urgently need tools for identifying both misinformation (false content spread innocently, out of ignorance of the facts) and disinformation (false content spread knowingly, for political or financial gain), including the ability to find out who created a piece of content and to analyze why.

These are important elements of media literacy, which, according to the United Nations Educational, Scientific and Cultural Organization (UNESCO), also includes the ability to cross-check information against additional sources, assess the credibility of authors and sources, recognize the presence or absence of rigorous journalistic standards, and create and/or share media in ways that reflect its credibility.

Putting together a toolkit of basic media literacy skills – perhaps focused specifically on “news literacy” – and presenting it directly on social media sites would serve two purposes. It would equip social media users with handy tools for analyzing what they see, and it would alert them that they are likely to encounter biased or misleading information on the other side of the login screen.

This is important because social networks don’t just deliver misleading or untrue content – they deliver it in a way that can disarm a user’s bullshit detector. The algorithms used by Facebook and YouTube favor content that is likely to provoke an emotional, often partisan, reaction. So when a member of Party A comes across a news report of some shameful act by a leader of Party B, they may believe it and pass it along without realizing that the ultimate source of the information is Party A. Often, the creators of such content bend (or outright break) the truth in order to maximize the emotional or partisan response.

This works very well on social networks: A 2018 Massachusetts Institute of Technology study of Twitter content found that falsehoods were 70% more likely to be retweeted than truths, and that falsehoods reached 1,500 people about six times faster than the truth did.

But media literacy training works, too. The Rand Corporation reviewed the available research on the effectiveness of media literacy training and found ample evidence, across numerous studies, that subjects were less likely to fall for false content after various kinds of media literacy training. Other organizations, including the American Academy of Pediatrics, the Centers for Disease Control and Prevention, and the European Commission, have reached similar conclusions and strongly recommend media literacy training.

Facebook has already taken some steps to promote media literacy. It has partnered with the Poynter Institute to develop media literacy training tools for children, millennials, and seniors. The company also donated $1 million to the News Literacy Project, which teaches students to scrutinize how articles are sourced, make and critique news judgments, spot and analyze viral rumors, and recognize confirmation bias. Facebook also maintains a “Media Literacy Library” on its website.

But all of this is voluntary. Requiring training and a quiz as a condition of access to the site is another matter entirely. “The platforms would be very reluctant to do this because they would worry about turning away users and reducing engagement,” says NYU’s Barrett.

If the social networks won’t act voluntarily, a regulator such as the Federal Trade Commission could force them to require media literacy training. From a regulatory perspective, that might be easier than getting Congress to mandate media literacy education in public schools. It could also be a more targeted way of mitigating the real risks Facebook poses than other proposals, such as breaking up the company or removing its protection from lawsuits over user-generated content.

Many Americans first became aware of misinformation when Russia weaponized Facebook to interfere in the 2016 election. But while Robert Mueller’s report proved that the Russians spread misinformation, the causal link between that misinformation and actual voting decisions remained blurry. For many Americans, January 6th made the threat that disinformation poses to our democracy real.

As misinformation on social networks does more direct, tangible damage, it becomes ever more apparent that people need help fine-tuning their bullshit detectors before they log in.
