In 2024, an estimated quarter of the world's population will go to the polls. This raises concerns about misinformation and fraud powered by artificial intelligence.
Malicious actors can use these tools to influence election results, while experts fear the consequences of the widespread use of deepfakes, warns Phil Muncaster of ESET's digital security team.
Nearly two billion people will go to the polls this year to vote for their preferred representatives and leaders. Major elections will be held in countries including the US, the UK and India, and 2024 also brings elections for the European Parliament. These contests may reshape the political landscape and the direction of geopolitics for years to come.
From theory to practice
Worryingly, deepfakes are already being used to influence voters. In January 2024, a deepfake audio message of US President Joe Biden was distributed via robocall to an unknown number of New Hampshire primary voters.
The message urged voters not to go to the polls and instead "save your vote for the November election." The caller ID was also spoofed to make the automated message appear to come from the personal number of Kathy Sullivan, a former state Democratic Party chairwoman who now heads a committee supporting Joe Biden's candidacy.
It's not hard to see how such calls could be used to dissuade voters from going to the polls to vote for their preferred candidate ahead of November's presidential election.
The risk will be highest in closely fought contests, where the shift of a small number of voters from one side to the other determines the outcome. A targeted campaign of this kind could cause incalculable damage by influencing a few thousand voters in the swing states that may decide the election.
The threat of disinformation via deepfakes
Misinformation and disinformation were recently ranked by the World Economic Forum (WEF) as the number one global risk for the next two years. The report warns: "Synthetic content will manipulate individuals, damage economies and divide societies in numerous ways over the next two years ... there is a risk that some governments will act too slowly, facing a trade-off between preventing disinformation and protecting freedom of speech."
(Deep)faking it
Tools like ChatGPT and generative artificial intelligence (GenAI) have enabled a far wider set of people to take part in creating disinformation campaigns. With the help of artificial intelligence, malicious actors have more time to refine their messages, improving the odds that their fake content gets published and heard.
In the context of election contests, deepfakes could obviously be used to erode voter confidence in a particular candidate. After all, it's easier to convince someone not to do something than the opposite.
If the supporters of a political party or candidate can be swayed by fake audio or video, it hands a clear advantage to their opponents. In some cases, rogue states may seek simply to undermine confidence in the democratic process itself, so that whoever wins finds it difficult to govern.
At the heart of the process is a simple truth: when people process information, they tend to value quantity and ease of understanding. This means that the more content we see with a similar message and the easier it is to understand, the more likely we are to believe it.
This is why marketing campaigns consist of short and constantly repeating messages. Add to this the fact that distinguishing deepfakes from real content is becoming increasingly difficult, and you have a potential recipe for the destruction of democracy.
What are tech companies doing about it?
Both YouTube and Facebook have been slow to respond to a number of deepfakes aimed at influencing recent elections. This is despite new European Union legislation (Digital Services Act) requiring social media companies to crack down on attempts to manipulate elections.
For its part, OpenAI said it will implement the Coalition for Content Provenance and Authenticity (C2PA) digital credentials for images produced by DALL-E 3. The cryptographic watermarking technology – also being tested by Meta and Google – is designed to make it harder to pass off fake images as authentic.
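The core idea behind such content credentials is that provenance metadata is cryptographically bound to the exact image bytes, so any alteration invalidates the credential. The sketch below illustrates that principle only; it is not the real C2PA format, which uses X.509 certificate chains and embedded manifests rather than the shared demo key assumed here.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration; real C2PA uses certificate-based signatures.
SECRET_KEY = b"demo-signing-key"


def sign_manifest(image_bytes: bytes, metadata: dict) -> dict:
    """Bind provenance metadata (who/what/when) to a hash of the image bytes."""
    image_hash = hashlib.sha256(image_bytes).hexdigest()
    payload = image_hash + json.dumps(metadata, sort_keys=True)
    signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"metadata": metadata, "image_hash": image_hash, "signature": signature}


def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Recompute the signature from the image we actually received; any pixel change breaks it."""
    image_hash = hashlib.sha256(image_bytes).hexdigest()
    payload = image_hash + json.dumps(manifest["metadata"], sort_keys=True)
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])


image = b"\x89PNG...original pixel data"
manifest = sign_manifest(image, {"generator": "DALL-E 3", "created": "2024-02-01"})

print(verify_manifest(image, manifest))              # untouched image: credential checks out
print(verify_manifest(image + b"tamper", manifest))  # edited image: credential fails
```

The design point is that verification depends on both the metadata and the image content, so a deepfake cannot simply reuse a genuine image's credential.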
However, these are small steps and there are legitimate concerns that the response to the threat will be insufficient and delayed as election fever grips the planet.
This is especially true when deepfakes spread in relatively closed networks, such as WhatsApp groups or via phone calls, where fake audio or video is difficult to detect and quickly debunk.
The theory of "anchoring bias" suggests that the first piece of information people hear is the one that sticks in their minds, even if it turns out to be false, says ESET's Muncaster.
If deepfakers reach swing voters first, there is no telling who the ultimate winner will be. In the age of social media and misinformation, the famous 18th-century Anglo-Irish cleric and satirist Jonathan Swift's observation that "falsehood flies, and the truth comes limping after it" takes on a whole new meaning.