Facebook’s ban on political advertisements ahead of Election Day is likely to suppress important speech.
This article is part of the Free Speech Project, a collaboration between Future Tense and the Tech, Law, & Security Program at the American University Washington College of Law, which examines how technology is influencing how we think about speech.
On Thursday, Facebook announced a series of changes to its political advertising and content moderation policies ahead of the U.S. election season. While Mark Zuckerberg argued that these changes should help Facebook do its part “to protect our democracy,” we fear they will have the opposite effect, suppressing important political speech and, at worst, neutering campaigns at a critical moment in the election.
First, Facebook will ban new political advertisements starting Oct. 27, a week before Election Day on Nov. 3. Because the ban is time-limited and applies only to new ads, it is far less restrictive than the policy Twitter announced in 2019, which banned most political ads outright. But as we argued when Twitter announced its ban, such restrictions are likely to muzzle important political speech and place a disproportionate burden on challenger campaigns, while benefiting more powerful incumbents with large organic reach on social media platforms. (We should also note that one of us, Matt, previously worked at Facebook as a director of public policy.)
Presumably, the intent of this policy is to force political advertisers to post all of their ads in advance so that third parties, such as fact-checkers and rival campaigns, can review them. That is a worthy goal. Transparency helps inform the public about the dynamics of the election and holds actors accountable for deceptive practices.
But in practice, the blackout period is likely to suppress important political speech. Facebook will block new ads just as voter mobilization is at its peak, hampering campaigns that use social media advertising to deliver critical, time-sensitive information and reminders to voters in key states ahead of the election, such as “Early voting ends tomorrow.” These appeals are especially important for mobilizing hard-to-reach and less politically engaged voters, such as young people.
Additionally, the blackout will prevent campaigns from responding to late-breaking events through paid speech on Facebook during the final week.
For example, during the 2016 election, FBI Director James Comey announced on Oct. 28 that the bureau would review a newly discovered batch of information related to Hillary Clinton’s email server, then stated on Nov. 6 that he had found no evidence of wrongdoing. (Election Day was Nov. 8.) Under Facebook’s new policy, campaigns could have kept running ads created before the deadline, but could not have launched new ads responding to the October letter or to the subsequent announcement that Clinton had done nothing wrong.
In other words, what this blackout is likely to mean in practice is a de facto ban on campaigns responding to current events through paid speech on Facebook. (At least campaigns will still be able to respond on other platforms, such as Google, and through text messages.) It will also likely mean an outright ban on new counter-speech in the final week before the election.
Campaigns and consultants will upload huge volumes of ad content to Facebook just before the deadline to preserve maximum flexibility during the blackout. Such a deluge of new ads will be difficult to review, diminishing the transparency value of posting them in advance. And the blackout will prevent rival campaigns from creating new ads in response to attacks or false claims launched just before the deadline.
Who is likely to benefit from all of this? Candidates with large Facebook followings who can spread their speech and counter-speech organically. These changes are also likely to benefit far-right and fake-news content, which spreads organically through engagement.
In addition to the ad blackout, Facebook announced that it would highlight its Voting Information Center at the top of users’ news feeds and take additional steps against election disinformation, false claims of victory, and premature declarations of election results. While pinning accurate, reliable voting information to the top of the news feed is a commendable move, its impact may be limited if voters must seek it out on their own. As an expert working group recently recommended, a better approach is to push reliable information into users’ feeds so that it is woven into the content they see as they scroll.
Facebook’s new policies also include tighter controls on false claims about voting and expand the removal of voter suppression content to cover both explicit and implicit misrepresentations about voting. These steps are promising, but critical details remain unclear. How will Facebook define “explicit” or “implicit” attempts to suppress the vote? What determines whether content is removed or merely labeled? How long will these policies remain in effect? Months, if the election is contested? So far, the labeling has been particularly confusing. When Facebook labels content, it is often unclear to users whether the company has judged the statement false, in need of context, or simply notable enough to warrant further action. For example, when Trump urged Facebook users to vote both by mail and in person, the platform’s response was to add a label stating that voting by mail is trustworthy. But that label did not say that Trump was encouraging something illegal or spreading false information.
The company’s continued reversals also make it hard to be confident that the policies announced Thursday will stick. Just a few months ago, Facebook refused to remove or label Donald Trump’s posts about elections and racist violence, arguing that a sitting president’s political speech should not be policed by a tech platform. Now Facebook is far more aggressive toward the president’s speech, removing claims it deems harmful and labeling others. Each approach has its merits, but lurching from one to the other within months makes it hard to discern the company’s principles and hard for users to know what to expect from the platform. These recent changes are especially strange given that they come less than a year after Zuckerberg delivered a major speech attempting to lay out a clear philosophy of free expression for the platform.
The reality is that we do not have the data we need to evaluate Facebook’s decision-making. It is simply unclear whether any of these policy or enforcement changes were correct, or whether they go far enough to counter voting disinformation while preserving the speech that is integral to free and fair elections. Facebook and others are encouraging new research in this area, but it is important to get more rigorous data on the costs and benefits of paid online speech before deciding to shut it down.
Better decisions are likely to come from better data. For example, most academic research shows that political advertising on social media is used mainly to mobilize supporters, and that persuading voters to change their minds is difficult.
As we argued last year, several alternative approaches hold more promise. Companies could add product features that make counter-speech easier, for example by allowing rival campaigns to serve ads to the same audience. They could also focus on ensuring that advertisers do not violate existing policies by using targeting to stealthily undermine election integrity. Finally, they could ensure that they do not profit from paid election speech by directing all political advertising revenue to nonprofits working on election integrity or to their own election integrity products.
Ultimately, the rules for election speech should be set by governments, not private companies. Facebook’s power to set the terms of political speech shows that the U.S. has failed to create a regulatory framework that ensures free and fair elections, a cornerstone of democracy. We believe Mark Zuckerberg takes his company’s role in democracy seriously, but two months before the most consequential election of our lifetimes is not the time to experiment.
Future Tense is a partnership between Slate, New America, and Arizona State University studying emerging technologies, public policy, and society.