The sad reason deepfakes pose little threat to US politics
MIT researchers, who worked with funding from Google’s Jigsaw group, recently conducted two studies to determine the potential impact political advertising using deepfake AI technology could have on US voters.
In total, more than 7,500 people in the United States took part in the paired studies – as far as we can determine, this is the largest series of experiments on the subject.
The participants were divided into three groups: one watched a deepfake video, the second read a text transcript of the video, and the third served as a control group and received no media prompt.
In the next phase, participants in all three groups were asked whether they believed the media they had seen or read and whether they agreed with certain statements.
The results suggest a bad news, worse news scenario. Let’s start with the bad news.
According to the paper:
Overall, we find that people are more likely to believe that an event has occurred when presented in video rather than text.
This might not knock you off your feet: people are more likely to believe what they see than what they read. Obviously, that’s a bad thing in a world where deepfakes are so easy to create.
But there is much worse to come. Again according to the paper:
Additionally, the difference between the video and text conditions in terms of attitudes and engagement is comparable to, if not smaller than, the difference between the text and control conditions. Taken together, these results challenge popular beliefs about the unique persuasiveness of political video over text.
In other words, people in the US are more likely to believe deepfake video than textual fake news, but the video does little more than text to change their political opinions.
The researchers are quick to warn against drawing too many conclusions from these data, noting that the conditions under which the studies were conducted do not necessarily mimic those in which US voters are likely to encounter deepfakes.
According to the paper:
It should be noted, however, that while we observe little differences in persuasiveness between video and text in our two studies, the effects of these two modalities may diverge more widely outside of an experimental context.
In particular, it is possible for videos to be more attention grabbing than text so that people scrolling social media are more likely to pay attention and therefore be exposed to video versus text.
As a result, even if video has a limited persuasive advantage over text in a controlled setting with forced choice, it could still exert an overwhelming influence on attitudes and behavior in an environment in which it is disproportionately noticed.
Okay, so it’s possible that deepfakes in the wild could be far more effective at getting people to change their political opinions.
But this particular research provides evidence to the contrary. From our point of view, it makes perfect sense.
In the last election, more US citizens cast votes than in any election in US history, yet the margins were so tight that one side is still (idiotically) claiming the election was rigged. In fact, two of the last four US presidents lost the popular vote. This suggests that US voters are far from fickle.
Obviously, deepfakes are pretty low on the list of problems facing US politics. Still, it is a bit sad to see how clearly MIT’s research confirms our country’s entrenched partisanship.