
How Facebook Algorithms Promote Hate and Toxic Content


Facebook is in the spotlight for both the right and the wrong reasons. The wrong reason is that what should have been a small configuration change took Facebook, Instagram and WhatsApp down last week for a couple of hours. The outage affected billions of users and showed how important Facebook and the other tech giants have become to our lives and to other businesses. The far more important story is, of course, that of whistleblower Frances Haugen, a former employee who has made public tens of thousands of pages of internal Facebook documents. The documents show that Facebook’s leadership repeatedly prioritized profit over social good. Facebook’s algorithms polarized society and promoted hatred and fake news because that drove “engagement” on its platforms. They also show that this tears communities apart and harms even vulnerable young teenagers who feel they do not have “perfect” bodies.

The Wall Street Journal published detailed exposés citing internal Facebook documents, and Frances Haugen, the whistleblower behind them, appeared on CBS’s 60 Minutes and testified at hearings in Congress. “On Facebook I have seen time and again that there are conflicts of interest between what is good for the public and what is good for Facebook,” Haugen told CBS. “And Facebook has always decided to optimize for its own interests, such as making more money.”

The 37-year-old data scientist filed eight whistleblower complaints against Facebook with the Securities and Exchange Commission (SEC) with the help of a non-profit organization, Whistleblower Aid. These complaints are backed by hard evidence: tens of thousands of internal documents that she secretly copied before leaving Facebook.

Why is this big news when these issues have been raised over and over, even more prominently after Cambridge Analytica? Didn’t we always know how Facebook, WhatsApp and other platforms have become powerful tools to promote hatred and divisive politics? Did UN investigators not blame Facebook for the genocidal violence against the Rohingyas? Haven’t we seen similar patterns during the communal riots in Muzaffarnagar?

The big news is that we now have evidence that Facebook was aware of what its platform was doing. We have it from the horse’s mouth: internal Facebook documents that Haugen has made public.

By giving preference to posts that encourage “engagement” – that is, posts that people read, like, or respond to on Facebook, WhatsApp, and Instagram – Facebook ensures that people stay on its platforms much longer. Users can then be “sold” more effectively to advertisers and shown more and more ads. Facebook’s business model is not to promote news, friendly conversation among users, or entertainment. It sells us to those who can sell us goods. And, like Google, it knows far better than anyone else who we are and what we might buy. Advertising gives Facebook 95% of its revenue and makes it one of only six trillion-dollar companies (as of September 2021) in terms of market capitalization.

Haugen testified before Congress that Facebook relies on artificial intelligence to find dangerous content. The problem, she said, is that “Facebook’s own research says that dangerous content cannot be adequately identified. And as a result, these dangerous algorithms, which they admit pick up on the extreme sentiments, the division[s]…”

That this happens is well known and has even been discussed in these columns. Facebook’s response was to set up an “independent” oversight board and to employ an array of fact-checkers. These and other processes would supposedly help filter out hate posts and fake news. What Facebook was hiding was that all of this is cosmetic. The traffic controllers of what you see in your feed – what, in Facebook’s terms, you “engage” with – are its algorithms. And these algorithms were designed to promote the most poisonous and divisive posts, because that is what drives engagement. Increasing engagement is the main driver behind Facebook’s algorithms, and it thwarts any action to detoxify its content.

Haugen’s testimony before Congress tells us what the real problems with Facebook are, and what governments need to do to protect citizens. The focus should not be on the individual pieces of content that people post, but on Facebook’s algorithms. It also brings back into play the instruments that countries have in their kitty to discipline Facebook. These are the “safe harbor” laws that protect intermediaries such as Facebook, which do not create content themselves but rather provide their platforms for so-called user-generated content. In the United States, this is Section 230 of the Communications Decency Act; in India, it is Section 79 of the Information Technology Act.

In the US, a revision of Section 230 would make the social media giant liable for its algorithms. In Haugen’s words: “If we had adequate oversight, or if we reformed [Section] 230 to make Facebook responsible for the consequences of its deliberate ranking decisions, I think they would abolish engagement-based ranking… Because it exposes teenagers to more anorexia content, it tears families apart, and in places like Ethiopia it literally fuels ethnic violence.” The main problem is not the hateful content that users generate on Facebook; it is Facebook’s algorithms, which continuously push toxic content into our feeds in order to maximize advertising revenue.

The widespread dissemination of toxic content on Facebook’s platforms is, of course, aided by the deliberate neglect of language checks in languages other than English. According to Haugen, although Hindi has the fourth largest and Bengali the fifth largest number of speakers, Facebook does not have enough language checkers for these two languages.

In these columns, we have explained why divisive content and fake news spread more virally than other content. Haugen, armed with thousands of pages of Facebook’s internal research, confirms what we and other serious researchers have been saying all along. The algorithms that Facebook and other digital tech companies use today do not directly encode rules to drive engagement. Instead, they use machine learning – what is loosely called artificial intelligence – to create those rules. It is the goal – increasing engagement – that creates the rules that then lead to the toxic content in our feeds, tearing societies apart and damaging democracy. We now have hard evidence, thousands of pages of Facebook’s internal research reports, that this is what actually happened. Worse still, Facebook’s leadership, including Mark Zuckerberg, was fully aware of the problem.
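To make the mechanism concrete, here is a minimal, hypothetical sketch in Python of what “engagement-based ranking” means in practice. The feature names, weights and example posts are invented for illustration and do not come from Facebook’s documents; the point is only that the ranking rule is an optimization target, not a hand-written editorial policy.

```python
# A minimal, hypothetical sketch of engagement-based ranking.
# It is NOT Facebook's actual code: the signals, weights and
# example posts below are invented purely for illustration.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float    # model-estimated chance of a click
    predicted_comments: float  # model-estimated chance of a comment
    predicted_reshares: float  # model-estimated chance of a reshare

def engagement_score(post: Post) -> float:
    # The ranking "rule" is just a weighted sum of predicted
    # engagement signals. No one hand-codes "promote divisive posts";
    # whatever content maximizes these predictions floats to the top.
    return (1.0 * post.predicted_clicks
            + 5.0 * post.predicted_comments
            + 10.0 * post.predicted_reshares)

def rank_feed(posts: list) -> list:
    # Order the feed purely by predicted engagement, highest first.
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("Calm local news update", 0.10, 0.01, 0.005),
        Post("Outrage-bait rumour",    0.30, 0.20, 0.150),
        Post("Friend's holiday photo", 0.20, 0.05, 0.010),
    ]
    for post in rank_feed(feed):
        print(f"{engagement_score(post):5.2f}  {post.text}")
```

In a system like this, whichever posts the model predicts will draw the most reactions and reshares rise to the top of the feed, regardless of whether they are accurate, healthy or inflammatory.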

Not all the damage on Facebook’s platforms is caused by algorithms. We learn from Haugen’s documents that Facebook had a “white list” of users whose content would stay up even if it violated the platform’s guidelines. Millions of such “special” users could break Facebook’s rules with impunity. We previously wrote about the Wall Street Journal’s evidence of how Facebook India protected BJP figures even though their posts had been repeatedly flagged within Facebook.

This is not all that Haugen’s trove of internal Facebook documents reveals. Following the example of cigarette manufacturers’ research on how to hook children on smoking at a young age, Facebook has researched what it calls “pre-teens”: children aged 9 to 12. Its research focused on how to retain young people on Facebook’s platforms so that it would have an endless supply of new consumers. This despite its own internal research, which shows that Facebook’s platforms promote anorexia and other eating disorders, depression and suicidal thoughts among the young.

All of this should have sunk Facebook. But it is a trillion-dollar company and one of the largest in the world. Its fat treasury, coupled with the power it wields in politics and its ability to “hack” elections, offers it the protection that big business receives under capitalism. The cardinal sin that capital cannot tolerate is lying to other capitalists. The internal documents that Haugen submitted to the Securities and Exchange Commission (SEC) could therefore finally lead to a pushback against the social media giants and to their regulation. If not strong regulation, it could at least mean some restrictions on the algorithms that promote hatred.

Finally, I will quote from a ten-year-old interview with a young tech insider. Jeff Hammerbacher, then a 28-year-old Silicon Valley insider, said: “The best minds of my generation think about how to get people to click on ads.” This is what drives the social media giants’ march to their trillions.
