Facebook is fighting against fake news
With more than 300 million users, India is Facebook’s largest market
“I have seen more pictures of the dead in the last three weeks than in my entire life,” wrote a Facebook researcher in India in 2019 after following the recommendations of the social network’s algorithms for three weeks.
The researcher’s report was part of a cache of internal documents known as The Facebook Papers, recently obtained by the New York Times and other US publications. They show how the social media giant is struggling to tame an avalanche of fake news, hate speech and inflammatory content – including “celebrations of violence” – in India, the network’s largest market.
The problem was aggravated, the New York Times reported, by insufficient moderation resources for India’s 22 officially recognized languages and a lack of cultural sensitivity.
A Facebook spokesperson told me the findings had led the company to carry out “deeper, more rigorous analysis” of its recommendation systems in India and to “make product changes to improve them.”
So is a lack of resources holding back Facebook’s efforts to fight fake news and inflammatory material in India? Facebook has partnered with 10 fact-checking organizations on the ground. Content on the social network is fact-checked in English and 11 other Indian languages, making India one of the company’s largest fact-checking operations, second only to the United States.
But the reality is more complex. Fact-checking organizations working with Facebook in India say they verify suspicious messages and posts flagged by users and label them as false. The network is then expected to limit the distribution of such posts.
“We really have no moral or legal authority over what Facebook does after we tag a message or post,” a senior official from a fact-checking organization told me.
Prime Minister Modi and Facebook boss Mark Zuckerberg in 2015
Moreover, fact-checking is only part of Facebook’s efforts to combat misinformation, and the problem in India is much bigger: hate speech is rampant; bots and fake accounts linked to India’s political parties and leaders abound; and user pages and large groups are full of inflammatory material directed against Muslims and other minorities. Disinformation here is an organized, carefully crafted operation. Elections and events such as natural disasters and the coronavirus pandemic typically trigger outbreaks of fake news.
It also does not help that Facebook declines to review the opinions and statements of politicians, citing “freedom of expression and respect for the democratic process.” “Much of the misinformation on social media in India is generated by politicians in the ruling party. They have the greatest influence, but Facebook doesn’t review them,” said Pratik Sinha, co-founder of Alt News, an independent fact-checking website.
So the latest revelations come as no surprise to most fact-checkers and human rights defenders in India. “We’ve always known that. No social media platform is free of blame,” says Sinha.
With its abundance of hate speech, trolling, and attacks on minorities and women, Indian Twitter is a polarized and dark place. WhatsApp, Facebook’s own messaging service, remains the biggest conduit for fake news and hoaxes in its largest market. YouTube, owned by Google, hosts plenty of fake news and controversial content but does not attract the same scrutiny. Last year, for example, the site carried up to 12 hours of live video fueling conspiracy theories about the death of Bollywood actor Sushant Singh Rajput. (Police later ruled that Rajput died by suicide.)
Many inflammatory videos were posted on Facebook about the riots in Delhi in 2020
Facebook’s problem lies elsewhere. India is its largest market, with 340 million users, and Facebook is an all-purpose social media platform that gives users individual pages and lets them form groups. “The wide range of features makes it more prone to all kinds of misinformation and hate speech,” says Sinha.
The overwhelming bulk of hate speech and misinformation on the social network is supposed to be caught by its internal AI systems and content moderators around the world. Facebook says it has spent more than $13 billion on teams and technology since 2016 and hired more than 40,000 people worldwide to work on safety and security. More than 15,000 people review content in more than 70 languages, including 20 Indian languages, a spokesperson told me.
When users report hate speech, automated “classifiers” – machine-learning models trained on human-annotated examples of different kinds of speech – screen the reports before selected posts reach human moderators, who often work for third-party contractors. “If these classifiers were good enough, they would capture a lot more hate speech with fewer false positives. But they are clearly not,” says Sinha.
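The pipeline described above – a classifier scores user-reported posts and only those above a confidence threshold are queued for human review – can be sketched in a few lines. This is a toy illustration, not Facebook’s actual system: real classifiers are trained statistical models, whereas this sketch uses an invented keyword score, and the term list and threshold are placeholders.

```python
# Toy sketch of report triage (hypothetical, not Facebook's code):
# score each reported post, then route high-scoring posts to humans.

FLAGGED_TERMS = {"slur_a", "slur_b", "incitement"}  # placeholder terms
REVIEW_THRESHOLD = 0.5  # hypothetical routing threshold

def score_post(text: str) -> float:
    """Return a crude 0..1 'hate speech' score based on flagged terms."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in FLAGGED_TERMS)
    return min(1.0, hits / 3)  # saturate after a few hits

def route(reported_posts: list[str]) -> dict[str, list[str]]:
    """Split user-reported posts into a human-review queue and the rest."""
    queues: dict[str, list[str]] = {"human_review": [], "no_action": []}
    for post in reported_posts:
        key = "human_review" if score_post(post) >= REVIEW_THRESHOLD else "no_action"
        queues[key].append(post)
    return queues
```

Sinha’s complaint maps directly onto this structure: a weak `score_post` either lets hate speech through (false negatives) or floods moderators with harmless posts (false positives).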
A Facebook spokesperson said the company had “made significant investments in technology to find hate speech in a variety of languages, including Hindi and Bengali.”
“As a result, we’ve halved the amount of hate speech people see on the platform this year; it is now down to 0.05% of viewed content. Hate speech against marginalized groups, including Muslims, is increasing worldwide, so we are improving enforcement and are committed to updating our policies as hate speech evolves online,” the spokesperson said.
Then there are allegations that Facebook favors the ruling party. A 2018 series of articles by journalists Cyril Sam and Paranjoy Guha Thakurta described, among other things, the “dominant position of the platform in India with more than a little help from friends of Prime Minister Narendra Modi and the BJP.” (The articles also looked at the Congress party’s own “relationship with Facebook.”) “A virality-based business model makes Facebook an ally of ruling governments,” said Guha Thakurta, co-author of The Real Face of Facebook in India.
Many believe much of the blame lies with the social network’s algorithms, which decide what users see when they search for a topic and which nudge them to join groups, watch videos, and explore new pages.
Alan Rusbridger, a journalist and member of Facebook’s Oversight Board, said the board had to address the perception that “the algorithms reward emotional content that polarizes communities because it is addictive.” In other words, the network’s algorithms make it possible for “marginal content to reach the mainstream,” according to Roddy Lindsay, a former data scientist at Facebook.
“This ensures these feeds continue to promote the most addicting, inflammatory content, and presents an impossible task for content moderators who struggle to monitor problematic viral content in hundreds of languages, countries and political contexts,” notes Lindsay.
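The dynamic Lindsay describes follows from the ranking objective itself. A minimal sketch, under the assumption (not Facebook’s published formula) that a feed simply sorts posts by an engagement score in which shares and comments count for more than likes, shows why content that provokes strong reactions rises to the top regardless of its accuracy or tone. All weights below are invented for illustration.

```python
# Hypothetical engagement-based feed ranking (illustrative only).
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(p: Post) -> float:
    # Shares and comments weighted above likes -- a common pattern in
    # engagement-optimized feeds; the 5x / 3x weights are invented.
    return p.likes + 3 * p.comments + 5 * p.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed by predicted engagement, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)
```

Under this objective, an inflammatory post with heavy shares and comments outranks a calmer post with more likes – the feed optimizes for reaction, not quality, which is exactly the feedback loop Lindsay says moderators cannot keep up with.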
In the end, says Frances Haugen, the former Facebook product manager turned whistleblower: “We should have software that is human-scaled, where humans have conversations together, not computers facilitating who we get to hear from.”