

Fact checks in newspapers, especially critical ones, can help keep political ads honest


In the run-up to and during election campaigns, newspapers run ad watches that comment on the accuracy of political candidates' advertisements. In new research, Patrick C. Meirick investigates how effective these ad watches are at keeping political ads honest. He finds that the more ad watches a race had, and the more explicitly critical they were, the more likely its ads were to be accurate and not to exaggerate a candidate's centrism.

False claims in political campaigns have long been a cause for concern, but that concern has escalated in recent years with the rise of populist candidates like Donald Trump. At the same time, fact checks have gained an increasing presence in political discourse at the national level: the news mentioned fact checks 10 to 20 times more often in 2012 than in 2001 and 2002. In state-level campaigns, however, checking the claims made in political advertisements has largely fallen to newspaper ad watches, stories in which journalists comment on the veracity of a political ad. Previous research has shown that ad watches can effectively debunk misleading claims made in ads, but research on whether fact-checking can actually keep campaigns honest is only beginning.

Our study analyzed ads and newspaper ad watches from eight competitive U.S. Senate races that received varying amounts of ad-watch coverage. The first hurdle in such a study is measuring honesty. A pair of experts from each of the eight states rated 10 ads from each candidate on the veracity of their claims and on how they portrayed the candidates' ideologies. In general elections in a two-party system, it is common for ads to portray their favored candidates as more moderate than they actually are. At the same time, ads tend to portray opponents as extreme representatives of their parties: Democratic candidates as far-left liberals, Republicans as far-right conservatives. We used voting records and the experts' assessments to determine candidates' actual ideology, and then calculated the difference between the portrayed and the actual ideology. We controlled for incumbency, margin of victory or defeat, ad spending per capita, and ad tone.
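As a rough illustration only, and not the study's actual coding procedure, the Python sketch below shows how such ideological exaggeration scores might be computed from portrayed and actual ideology ratings. The 1-to-7 ideology scale, the field names, and the sign conventions are all assumptions made for the example.

```python
# Hypothetical sketch: scoring how far an ad shifts candidates from their
# actual ideology. The scale, field names, and sign conventions are
# illustrative assumptions, not the coding scheme used in the study.

def exaggeration_scores(ad, actual_ideology):
    """Return exaggerated-centrism and exaggerated-extremism scores for one ad.

    Ideology is assumed to run from 1 (most liberal) to 7 (most conservative),
    with 4 as the midpoint. `ad` holds expert ratings of how the ad portrays
    each candidate; `actual_ideology` holds estimates built from voting
    records and expert assessments.
    """
    sponsor = ad["sponsor"]      # the candidate the ad favors
    opponent = ad["opponent"]
    midpoint = 4.0

    # Exaggerated centrism: the favored candidate is shown closer to the
    # midpoint than their actual record places them.
    centrism_shift = (abs(actual_ideology[sponsor] - midpoint)
                      - abs(ad["portrayed"][sponsor] - midpoint))

    # Exaggerated extremism: the opponent is shown farther from the midpoint
    # than their actual record places them.
    extremism_shift = (abs(ad["portrayed"][opponent] - midpoint)
                       - abs(actual_ideology[opponent] - midpoint))

    return {"exaggerated_centrism": centrism_shift,
            "exaggerated_extremism": extremism_shift}


# Example with made-up numbers: a Democratic sponsor portrayed as more
# moderate than her record, and her opponent portrayed as more extreme.
actual = {"dem_candidate": 2.0, "rep_candidate": 5.5}
ad = {"sponsor": "dem_candidate",
      "opponent": "rep_candidate",
      "portrayed": {"dem_candidate": 3.5, "rep_candidate": 6.5}}

print(exaggeration_scores(ad, actual))
# {'exaggerated_centrism': 1.5, 'exaggerated_extremism': 1.0}
```

On this convention, positive scores indicate exaggeration in the direction the sponsoring campaign would prefer: its own candidate looks more moderate, the opponent more extreme.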

We found that the more ad watches a race had, the more accurate its ads were and the less they exaggerated the centrism of their favored candidates. Surprisingly, however, ads in races with more ad watches also portrayed opposing candidates as more extreme than their positions warranted. In retrospect, this may have been driven mainly by two campaigns that received little ad-watch coverage and that portrayed, for example, a Democrat running in a liberal state as more conservative than she really was rather than as extremely liberal.


We counted the total number of ad watches in each race, but ad watches differ in how they approach the job. Some explicitly criticize misleading ads. Some speculate about an ad's strategy or effectiveness. Some report competing claims from campaign officials in "he said, she said" coverage. Some do more than one of these things. So we also coded each ad watch and tallied how many in each race contained explicit criticism, strategy discussion, and he-said/she-said reporting.
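To make that coding step concrete, here is a small, purely illustrative Python sketch that tallies, per race, how many ad watches contain each type of content. A single ad watch can fall into more than one category. The data structure and category labels are assumptions for illustration, not the study's coding instrument.

```python
from collections import Counter

# Hypothetical coding of ad watches: each ad watch is tagged with the race it
# covers and the (possibly overlapping) types of content it contains.
ad_watches = [
    {"race": "State A Senate", "content": {"explicit_criticism", "strategy"}},
    {"race": "State A Senate", "content": {"he_said_she_said"}},
    {"race": "State B Senate", "content": {"strategy"}},
]

# Tally, per race, the total number of ad watches and how many contain
# each type of content.
tallies = {}
for watch in ad_watches:
    race_counts = tallies.setdefault(watch["race"], Counter())
    race_counts["total"] += 1
    for content_type in watch["content"]:
        race_counts[content_type] += 1

for race, counts in tallies.items():
    print(race, dict(counts))
```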

Explicit criticism in ad watches seemed to work best for keeping campaigns honest. It was associated with greater ad accuracy and less exaggerated centrism by favored candidates, as was the total number of ad watches, but without the unfortunate association with exaggerated extremism by opponents. He-said/she-said coverage was a mixed bag: it was associated with greater ad accuracy, but also with somewhat exaggerated candidate centrism. Strategy coverage, if anything, backfired. It was linked to somewhat exaggerated opponent extremism, and it was unrelated to the other two indicators of ad honesty.

Ad watches were inspired in part by a desire to get negative advertising under control, and other studies have found that they disproportionately scrutinize negative ads. In our study, too, negative ads were ad watched about half again as often as positive ads. Ad watches do not seem to make a campaign any less negative, however. The number of ad watches was unrelated to overall ad tone or to the number of negative ads in a race. It may be that campaigns' fear of the scrutiny that negative ads could invite was offset by their desire for the free "amplification" of their ads.

But do negative ads deserve their bad name? Are positive ads necessarily more virtuous? In our study, positive ads were rated as more accurate, but they were also more prone than other ads to exaggerating the centrism of the candidates they supported.

A caveat here: we cannot be certain that ad watches caused the greater honesty of the ads. In fact, the two likely influence each other: heavier ad-watch coverage should make campaigns more honest, but dishonest campaigns should also draw heavier ad-watch coverage. Studying these dynamics in real time would be fascinating. Given our findings, however, it makes more sense to conclude that ad watches kept campaigns honest than to conclude that newspapers responded to honest campaigns by introducing or stepping up ad watches and criticism. In addition, ad-watch coverage in the races we selected all began in the summer before the general election, suggesting that it was planned as a matter of course rather than as a reaction to the ads themselves.

We hope our study will give more news editors a reason to start or expand the use of ad watches, because while ad watches exist for most U.S. Senate races, they are less common in U.S. House races. We also hope those ad watches will be explicitly critical, since explicit criticism seems to work best, while other types of content had an ambivalent or even counterproductive relationship with ad honesty. Finally, we believe that ad watches should pay attention not only to specific claims in ads but also to their overall narrative and their characterization of the candidates, especially in the case of positive ads.


Note: This article represents the views of the author and not the position of USAPP – American Politics and Policy or the London School of Economics.

Shortened URL for this post: http://bit.ly/2YxdtX9

About the author

Patrick C. Meirick – University of Oklahoma
Patrick C. Meirick is Associate Professor of Communication and Director of the Center for Political Communication at the University of Oklahoma. His research interests include public opinion, political misperceptions, political advertising and media effects. He is co-author of A Nation Fragmented: The Public Agenda in the Information Age (2019, Temple University Press) with Jill A. Edy.
