Is media literacy the magic word for fake news?
With the proliferation of online news and social media over the past decade, a whole new category of information has entered the popular lexicon: fake news. Online disinformation has come a long way from "on the internet, nobody knows you're a dog" to a world where a malicious article from a troll farm can stand side by side with legitimate, rigorous journalism. Awareness of the problem is widespread, but stemming misinformation requires social media platforms, journalists, fact-checkers, and citizens to take action.
One solution that has been touted for preventing the spread of disinformation is “media literacy,” but what is it and how effective can it really be?
Media literacy is essentially critical thinking applied to the media we consume. It may seem superfluous to teach it in schools, where critical thinking is presumably already taught, but the UK's Commission on Fake News and the Teaching of Critical Literacy Skills, run by the All-Party Parliamentary Group (APPG) on Literacy and the National Literacy Trust, found that only 2% of children have the critical literacy skills they need to tell whether a news story is real or fake.
If media literacy is essential to navigating online media, it should be a compulsory part of schooling – a step likely limited to countries that already maintain high standards of education. In Finland, the fact-checking organization Faktabaari teaches media literacy and fact-checking in schools. But its materials are only intended for children up to ninth grade (ages 15–16), which limits the program's reach. Programs aimed at even younger audiences, like the BBC's planned My World series executive produced by Angelina Jolie, are so broad and target such a young audience that they are unlikely to have much of an impact.
On the other hand, if media literacy education is voluntary rather than compulsory, it becomes pure self-selection and no solution to the problem. Those interested enough to seek out training in spotting misinformation are likely the ones least susceptible to being deceived by it in the first place.
While it is encouraging that groups like UNESCO are issuing handbooks to help journalists combat fake news, journalists are generally not the most vulnerable to it. They typically already have to verify their reporting and work with editors and fact-checkers who screen out untrustworthy information.
The core of the problem is that disinformation exploits psychological vulnerabilities, such as those described by social identity theory and the human need for group belonging or fear of social isolation. People are most vulnerable around topics they believe they are knowledgeable about but are not.
Studies have questioned the extent to which better media literacy can help combat this type of bias. A study in the Journal of Experimental Psychology found that repeated statements are easier to process, and therefore perceived as more truthful, than new statements – even when participants knew beforehand that the statements were false.
What could benefit the public are more comprehensive digital hygiene programs, like Facebook's Digital Literacy Library, which break big topics down into smaller pieces (e.g., individual privacy settings and peer-to-peer behavior) that are better suited to public awareness campaigns.
More viable solutions will emerge when the problems are approached at a higher level. An NPR poll found that the American public views misleading information as the greatest threat to election security, but that the solution lies more with the media, tech companies, and government than the public.
What can the media do to combat misinformation? Organizations like Reuters publish guides on how to spot fake news, and The Guardian prominently displays an article's publication date on its social media thumbnails. The Guardian made that change after observing that, even with a highlighted date bar on its articles, Facebook users often see only the shared post and never click through to the article itself. The change was intended to prevent users from stripping reporting of its original context.
The boundary between a news platform and a social media platform can also be unclear. Traditional media outlets often strictly regulate political messaging and advertising, while social media platforms face far fewer restrictions. Facebook, the leading social media platform, also runs a journalism project and acts as a gatekeeper for which news reaches its users. To that extent, Facebook casts itself in a journalistic role, yet it declines to moderate political advertising on its platform.
Especially in markets like the UK, it’s clear that Facebook – which has vowed not to remove inflammatory or misleading political ads – wants to see how it can take advantage of the fact that it doesn’t have to play by the same rules as a television broadcaster.
Meanwhile, platforms like Twitter and Spotify have banned political advertising to combat fake news. But these bans also affect smaller political organizations that lack the reach (or financial strength) to advertise anywhere other than online – social media platforms remain the cheapest way to advertise.
There will certainly not be a one-size-fits-all solution. It is not enough to simply urge citizens not to fall victim to misinformation campaigns run by hostile intelligence agencies using manipulative psychological tactics, and there is no easy way to get people to develop a higher level of self-awareness and understanding of their own biases. When it comes to freedom of expression and open debate, regional and cultural differences also matter, and governments need to find solutions that work specifically for them.
Media literacy is a good start, but not the end of the story.