Targeted ads are not only annoying, they can also be harmful. Here's how

Five years after the Brexit vote and three since the Cambridge Analytica scandal, we now know what role targeted political advertising can play in promoting polarization. In 2018, it was revealed that Cambridge Analytica had used data from 87 million Facebook profiles without user consent to help Donald Trump’s 2016 election campaign target key voters with online advertisements.

In recent years we have learned how this type of targeted advertising can create political filter bubbles and echo chambers that are suspected of dividing people and increasing the spread of harmful disinformation.

But the vast majority of ads traded online are commercial, not political. Commercial targeted advertising is the main source of income in the internet economy, yet we know little about how it affects us. We do know that our personal information is collected to support targeting in ways that violate our privacy. But beyond privacy, how else could targeting harm us, and how could that harm be prevented?

These questions motivated our recent research. We found that targeted online advertising also divides and isolates us by preventing us from collectively flagging ads we object to. In the physical world we do this, perhaps when we see an ad at a bus stop or train station, by making regulators aware of harmful content. Online, however, consumers are isolated, because what each of us sees is limited to what is targeted at us.

Unless we fix this flaw and stop targeted ads from cutting us off from one another's reactions, regulators cannot protect us from online ads that could harm us.

Given the sheer volume of ads traded online, human moderators cannot review every campaign. Instead, machine learning algorithms increasingly check ad content and predict the likelihood that an ad is harmful or falls short of standards. But these predictions can be biased, and the checks usually catch only the most obvious violations. Among the many ads that pass them, a significant portion still contains potentially harmful content.
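
To make this concrete, here is a minimal sketch of the kind of automated pre-screening described above, written in Python with scikit-learn. Everything in it is an assumption for illustration: the training examples, the harm labels, the `review` function and the approval threshold are all invented, and real ad platforms use far larger models and datasets.

```python
# Illustrative sketch only: a toy classifier that scores ad copy for
# potential harm, in the spirit of the automated checks described above.
# The examples, labels and threshold are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny invented training set: 1 = previously flagged as harmful, 0 = approved.
ads = [
    "Lose weight fast with this miracle pill",
    "Comfortable running shoes, free delivery",
    "Get your beach body in two weeks, guaranteed",
    "Fresh groceries delivered to your door",
]
labels = [1, 0, 1, 0]

# Turn ad text into word features and fit a simple linear classifier.
vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(ads), labels)

def review(ad_text, threshold=0.5):
    """Predict the probability an ad is harmful; block it above the threshold."""
    p_harmful = model.predict_proba(vectorizer.transform([ad_text]))[0, 1]
    verdict = "BLOCK" if p_harmful > threshold else "APPROVE"
    return f"{verdict} (estimated harm probability: {p_harmful:.2f})"

print(review("Miracle pill melts fat fast"))
print(review("New trainers now in stock"))
```

The sketch also exposes the failure mode discussed above: a classifier like this can only flag content that resembles what it was trained on, so subtler or novel harms, along with any bias in the training labels, pass straight through.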

Traditionally, advertising standards authorities have taken a reactive approach to regulating advertising, based on consumer complaints. Take Protein World's 2015 "Beach Body" campaign, which featured on billboards across the London Underground showing a bikini-clad model next to the words "Are you beach body ready?". Many commuters complained that it promoted harmful stereotypes. Shortly afterwards, the ad was banned and a public inquiry into socially responsible advertising was launched.

Regulating advertising

The Protein World case shows how regulators work. Because it responds to consumer complaints, the regulator remains open to examining how ads conflict with perceived social norms. And as those norms evolve over time, this helps regulators keep pace with what the public considers harmful.

Consumers complained about the ad because they believed it promoted and normalized a harmful message. Yet reportedly only 378 commuters filed complaints with the regulator, out of the hundreds of thousands who likely saw the posters. This raises the question: what about everyone else? Had the campaign run online, people would not have seen posters defaced by disgruntled commuters and might never have been prompted to question its message.

Protein World's "are you beach body ready?" ad cannot appear in its current form http://t.co/CVdVCqLDJQ pic.twitter.com/TCfPKTKqyU

– British Vogue (@BritishVogue) April 30, 2015

Had the ad been able to target only the consumers most receptive to its message, it might have drawn no complaints at all. The harmful message would have gone unchallenged, and the regulator would have missed an opportunity to update its guidelines in line with current societal norms.

Sometimes ads are harmful only in a specific context, as when ads for high-fat foods are aimed at children, or gambling ads are aimed at people with a gambling addiction. Targeted ads can also do harm through what they leave out: for example, when shoe ads crowd out job or health ads that someone might find more useful, or even vital.

These cases can be described as contextual harms: they are not tied to any specific content, but depend on the context in which the ad is presented to the consumer.

Machine learning algorithms are poor at detecting contextual harms. Worse, the way targeting works actually reinforces them. For example, several audits have uncovered how Facebook allowed discriminatory targeting that exacerbates socio-economic inequalities.

Deep divides

At the root of all these problems is the fact that consumers have a highly isolated experience online. We call this a state of "epistemic fragmentation", in which the information available to each individual is limited to what is targeted at them, with no shared space, like the London Underground, in which to compare notes with others.

Because targeting is personalized, each of us sees different ads. That makes us more vulnerable: ads can exploit our personal weaknesses or withhold opportunities we never knew existed. And because we don't know what other users are seeing, our ability to look out for other vulnerable people is limited too.

Regulators are currently pursuing a combination of two strategies to address these challenges. First, there is a growing focus on educating consumers, to give them "control" over how they are targeted. Second, there is a trend towards proactive screening of ad campaigns, automating review before ads go online. Both strategies are too limited.

Instead, we should focus on restoring consumers' role as active participants in regulating online advertising. This could be achieved by reducing the precision of targeting categories, introducing targeting quotas, or banning targeting altogether. Any of these would ensure that at least some online ads are seen by more diverse consumers in a shared context where objections can be raised and shared; the sketch below illustrates the quota idea.
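
As a rough illustration of one of these options, the sketch below shows how a targeting quota might work: a fixed share of an ad's impressions is diverted to users outside its target group, so every ad is eventually seen in a more diverse context. The `serve_ad` function, the 20% quota and the user names are hypothetical, invented purely to illustrate the idea.

```python
import random

# Hypothetical quota: at least 20% of impressions go to users outside
# the advertiser's target group, so no ad is seen only by its targets.
QUOTA = 0.20

def serve_ad(ad, targeted_users, all_users, impressions):
    """Return (user, ad) pairs, diverting a quota of impressions broadly."""
    served = []
    non_targets = [u for u in all_users if u not in targeted_users]
    for _ in range(impressions):
        if non_targets and random.random() < QUOTA:
            # Quota impression: shown to someone outside the target group,
            # who can then see, question and report the ad.
            served.append((random.choice(non_targets), ad))
        else:
            # Regular impression: shown to a targeted user as usual.
            served.append((random.choice(targeted_users), ad))
    return served

# Toy usage with invented users.
users = ["alice", "bob", "carol", "dave"]
print(serve_ad("beach body ad", targeted_users=["alice"],
               all_users=users, impressions=5))
```

Even a modest quota would guarantee that some consumers encounter each ad outside its intended audience, recreating online a little of the shared scrutiny of the bus stop or the Underground platform.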

Following the Cambridge Analytica scandal, the Electoral Commission sought to open up the hidden world of targeted political advertising in the run-up to the 2019 UK general election. Some broadcasters asked their audiences to share the targeted ads appearing in their social media feeds with a wider public. Campaign groups and academics were then able to analyze targeting campaigns in more detail and uncover where ads might be harmful or untruthful.

These strategies could also be applied to commercial targeted advertising, breaking the epistemic fragmentation that currently prevents us from responding collectively to harmful ads. Our research shows that political targeting is not the only kind that causes harm: commercial targeting demands our attention too.

Silvia Milano, Postdoctoral Fellow in AI Ethics, University of Oxford; Brent Mittelstadt, Research Fellow in Data Ethics, University of Oxford; and Sandra Wachter, Associate Professor and Senior Research Fellow, Oxford Internet Institute, University of Oxford

This article is republished from The Conversation under a Creative Commons license. Read the original article.
