2024, an election year: are deepfakes a weapon of mass deception?

An estimated quarter of the world's population will go to the polls in 2024. This raises concerns that artificial intelligence could be exploited to spread misinformation and commit fraud.

Malicious actors can use these tools to influence election results, and experts fear the effects of widespread deepfake use, warns Phil Muncaster of the digital security company ESET.


Almost two billion people will go to the polls this year to vote for the representatives and leaders of their choice. Major elections will be held in many countries, including the US, the UK and India, and 2024 will also see elections for the European Parliament. These contests may shape the political landscape and the direction of geopolitics for years to come.

From theory to practice

There are worrying signs that deepfakes are already being used to influence voters. In January 2024, a deepfake audio message of US President Joe Biden was sent via robocall to an unknown number of New Hampshire primary voters.

The message urged voters not to go to the polls and instead "save your vote for the November election." The caller ID was also spoofed to make it appear that the automated message came from the personal number of Kathy Sullivan, a former state Democratic Party chairwoman who now heads a committee supporting Joe Biden's candidacy.

It's not hard to see how such calls could be used to dissuade voters from going to the polls to vote for their preferred candidate ahead of November's presidential election.

The risk will be highest in close contests, where the shift of a small number of voters from one side to the other determines the outcome. A targeted campaign of this kind could cause incalculable damage by influencing a few thousand voters in the swing states that may decide the election.

The threat of disinformation via deepfakes

Misinformation and disinformation were recently ranked by the World Economic Forum (WEF) as the number one global risk for the next two years. The report warns: "Synthetic content will manipulate individuals, damage economies and divide societies in numerous ways over the next two years ... there is a risk that some governments will act too slowly, facing a trade-off between preventing disinformation and protecting freedom of speech".
(Deep)faking it

Tools like ChatGPT and generative artificial intelligence (GenAI) have enabled a far wider set of people to take part in creating disinformation campaigns. With the help of artificial intelligence, malicious actors have more time to work on their messages, enhancing their efforts to ensure their fake content gets published and heard.

In the context of election contests, deepfakes could obviously be used to erode voter confidence in a particular candidate. After all, it's easier to convince someone not to do something than the opposite.

If the supporters of a political party or candidate can be swayed by fake audio or video, it would hand a sure win to their opponents. In some cases, rogue states may seek to undermine confidence in the democratic process itself, so that whoever wins finds it difficult to govern.

At the heart of the process is a simple truth: when people process information, they tend to value quantity and ease of understanding. This means that the more content we see with a similar message and the easier it is to understand, the more likely we are to believe it.

This is why marketing campaigns consist of short and constantly repeating messages. Add to this the fact that distinguishing deepfakes from real content is becoming increasingly difficult, and you have a potential recipe for the destruction of democracy.

What are tech companies doing about it?

Both YouTube and Facebook were slow to respond to certain deepfakes designed to influence recent elections. This is despite the European Union's new Digital Services Act, which requires social media companies to crack down on attempts to manipulate elections.

For its part, OpenAI said it will implement the Coalition for Content Provenance and Authenticity (C2PA) digital credentials for images produced by DALL-E 3. The cryptographic watermarking technology – also being tested by Meta and Google – is designed to make it harder to produce fake images.
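The idea behind such content credentials can be sketched roughly as follows. This is a minimal, hypothetical illustration only, not the actual C2PA format: real C2PA implementations use signed manifests embedded in the file and X.509 certificate chains, whereas here a simple HMAC stands in for the cryptographic signature.

```python
# Conceptual sketch of content provenance credentials, loosely in the
# spirit of C2PA. NOT the real specification: an HMAC with a shared key
# stands in for a proper digital signature, purely to show the principle
# of binding an image's hash to a signed statement about its origin.
import hashlib
import hmac
import json

SECRET_KEY = b"issuer-private-key"  # hypothetical signing key


def issue_credential(image_bytes: bytes, generator: str) -> dict:
    """Build a manifest binding the image hash to its claimed origin, then sign it."""
    manifest = {
        "claim_generator": generator,
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_credential(image_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, then check that the image itself was not altered."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False  # manifest tampered with, or not issued by this key
    return claimed["content_hash"] == hashlib.sha256(image_bytes).hexdigest()


image = b"\x89PNG...fake image bytes"
cred = issue_credential(image, "DALL-E 3")
print(verify_credential(image, cred))           # untouched image verifies
print(verify_credential(image + b"x", cred))    # any edit breaks the credential
```

The key property illustrated here is that an attacker can strip the credential from an image, but cannot forge a valid one for altered content, which is why platforms checking for credentials can treat their absence as a warning sign.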

However, these are small steps and there are legitimate concerns that the response to the threat will be insufficient and delayed as election fever grips the planet.

Detection becomes especially difficult when deepfakes spread in relatively closed networks, in private groups or through phone calls, making any fake audio or video hard to identify and quickly debunk.

The theory of "anchoring bias" suggests that the first piece of information people hear is the one that sticks in their minds, even if it turns out to be false, says ESET's Muncaster.

If deepfakers reach swing voters first, no one knows who the ultimate winner will be. In the age of social media and misinformation, the famous 18th-century Anglo-Irish satirist and cleric Jonathan Swift's saying that "falsehood flies, and the truth comes limping after it" takes on a whole new meaning.

iGuRu.gr The Best Technology Site in Greece

Written by newsbot
