Voting without Trust: Deepfakes and Elections in the Age of AI

Mo Dhaliwal
Skyrocket Digital
Jul 30, 2024
6 min read
Technology

Deepfake technology threatens democratic processes by creating realistic but false media, undermining trust in elections and institutions. This makes it challenging to discern genuine information from manipulative content. Efforts to counteract deepfakes include developing detection tools and legislative measures.

In 2018, it was discovered that British firm Cambridge Analytica had harvested the data of up to 87 million Facebook users to create psychological profiles of voters and hypertarget them with advertising messages in the lead-up to the 2016 US election. For the first time, this highlighted the degree to which the general public was vulnerable to manipulative messaging. We’ve long since woken up to the fact that information seen online may not be factual or productive. But what if the same messages were shared by someone you trust? Like your grandmother, a respected community member, or a well-liked celebrity? Can you trust your eyes and ears?

Discourse around elections is already filled with distrust, whether it’s accounts of election fraud and “stolen” elections or the general disinformation that proliferates on social media. Distrust is at such an all-time high that a former US President can have his ear grazed by a bullet from an attempted assassination on live TV, and many people won’t even believe their own eyes, instead parroting themes of an “inside job” or asserting that the entire scenario was staged.

In this era of distrust, deepfake technology isn’t just another trust-destroying weapon for psychological manipulation — it’s a nuclear warhead and everyone is armed.

Deep Learning + Fake = Deepfakes

Deepfakes, a portmanteau of AI "deep learning" processes and "fake", involve the use of artificial intelligence to create highly convincing videos, audio, and images imitating and misrepresenting real people. Sophisticated AI models are trained on real video and audio of a person — deep learning — and based on this training they are able to generate new content where people appear to say or do things they never actually did. The implications for information integrity are staggering, as deepfakes can be employed to manipulate public opinion and erode trust in authentic media. Experts warn that the proliferation of deepfakes could result in a "liar's dividend", where even genuine evidence of wrongdoing can be dismissed as fake. This phenomenon threatens the ability of citizens to make informed choices, protect human rights, and maintain the efficacy of the judiciary.

The Threat of Deepfakes to Democracy

The advent of deepfake technology poses a significant threat to democratic processes worldwide. This synthetic content is sophisticated and easily accessible; in fact, it has become a fun pastime for content creators who make entertaining videos featuring celebrities and public figures of all sorts. But this technological marvel has moved far beyond novelty and been transformed into a tool for widespread disinformation. The potential for deepfakes to undermine democracy is profound: swaying elections, eroding trust in institutions, and exacerbating social and political divides.

Deepfake Impact on Elections

In the great “emails” controversy of 2016, the mere timing of an FBI investigation, announced just days before the US election, was what Hillary Clinton called the “determining factor” in her loss that year. The incident highlighted the precariousness of public perception and the viral nature of bad news in a hyperconnected world, a virality that deepfakes can be engineered to exploit.

Deepfakes have already had a massive impact on elections. Incidents like the New Hampshire robocall, in which an AI-generated voice impersonating President Biden dissuaded voters from participating in the Democratic Primary, illustrate the potential for disruption to democratic processes. In Slovakia, an AI-generated audio clip impersonated the political candidate Michal Šimečka of Progressive Slovakia, making him appear to boast about rigging the 2023 election. Needless to say, the audio went viral and his party was defeated in the election that year.

Incidents such as these can sway voter opinions and ultimately influence election outcomes. The speed at which deepfakes can be created and disseminated complicates efforts to debunk false information, especially during critical pre-election periods.

Deepfakes also pose a unique threat to local or lesser-known political races, where there is less scrutiny and far fewer resources to combat disinformation. Bad actors can use generative AI to make deceptive campaign ads more convincing and easier to produce at scale. Your local municipal government, mostly busy with land rezoning, sewer systems and public transportation, is ill-equipped to counter a targeted deepfake campaign.

The Race to Detect and Mitigate Deepfakes

Addressing the challenge of deepfakes is multi-faceted and far from easy. AI-powered detection tools are being developed to identify manipulated media in real time; facial landmark analysis, temporal consistency checks and multimodal detection are among the techniques being explored to counter the threat. In the USA, legislative measures are also being pursued, with numerous bills introduced to regulate the use of AI-generated content in elections. These laws aim to enhance transparency and penalize those who intentionally mislead voters.
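To make one of those techniques concrete, here is a deliberately simplified sketch of a "temporal consistency check". It is not a real detector: the landmark coordinates are synthetic and the threshold is arbitrary, and a production system would extract landmarks with a trained vision model. It only illustrates the underlying idea that genuine footage tends to move smoothly from frame to frame, while crude synthetic video often shows physically implausible jumps.

```python
def temporal_jitter(landmarks):
    """Mean frame-to-frame displacement of tracked facial landmarks.

    landmarks: list of frames, each frame a list of (x, y) points.
    Genuine video tends to move smoothly between frames; crude
    deepfakes often exhibit abrupt, unnatural jumps.
    """
    if len(landmarks) < 2:
        return 0.0
    total, count = 0.0, 0
    for prev, curr in zip(landmarks, landmarks[1:]):
        for (x0, y0), (x1, y1) in zip(prev, curr):
            total += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
            count += 1
    return total / count

def flag_suspicious(landmarks, threshold=5.0):
    """Flag a clip whose average jitter exceeds a (hypothetical) tuned threshold."""
    return temporal_jitter(landmarks) > threshold

# Smooth, natural-looking motion: small per-frame displacements.
smooth = [[(100 + f, 200 + f)] for f in range(10)]
# Erratic motion: the landmark jumps back and forth between frames.
erratic = [[(100 + 20 * (f % 2), 200)] for f in range(10)]

print(flag_suspicious(smooth))   # small jitter, not flagged
print(flag_suspicious(erratic))  # large jitter, flagged
```

Real detectors combine many signals like this (blink rate, lighting consistency, audio-visual sync) and learn the decision boundary from data rather than hand-picking a threshold, which is exactly why they struggle on manipulation styles they were never trained on.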

Election administrators are conducting exercises to prepare for scenarios involving deepfakes, while public awareness campaigns educate voters about the threat of manipulated media. Major firms, including Microsoft, Amazon and Google, have established the AI Elections Accord to combat deceptive AI use. These companies are implementing rapid response mechanisms to address concerns about deepfakes and are committed to sharing information about emerging threats.

Despite these efforts, current deepfake detection tools are severely limited. Detection algorithms struggle to generalize to new, unseen samples that differ from their training data. They are also vulnerable to adversarial attacks, including the poisoning of that training data. Real-time detection requires significant computational power and low latency, posing practical challenges for widespread implementation. Meanwhile, the rapid evolution of deepfake technology means that detection tools will lag far behind new generation methods, creating a scenario where the impact of deepfakes will continue reverberating long after meaningful countermeasures have been deployed.

Erosion of Trust in News and Institutions

The cost of verifying the authenticity of information has become prohibitively high, leading people to doubt even genuine content. This uncertainty erodes trust in media sources and undermines the credibility of visual and audio evidence, traditionally a cornerstone of journalistic integrity. We have seen this in even the most visceral examples, such as the assassination attempt on former President Trump, where a vocal minority believes the entire incident was somehow faked.

This selective trust diminishes the role of news institutions as impartial arbiters of truth. When people cannot trust the information they receive, their confidence in political institutions and processes declines, leading to lower civic engagement and increased cynicism.

The secondary effects of deepfakes are apparent in the generalized distrust of anything to do with government or mainstream news media. Mere awareness that deepfake technology has proliferated means deepfakes don’t even need to be deployed: apathy and uncertainty alone can undermine the most factual information.

Countering Deepfakes with Deep Collaboration

There’s no easy answer on what to do about deepfakes. Addressing the threat of psychological manipulation requires a comprehensive approach that includes technological advancements, regulatory measures, and public education. Continuous adaptation and collaboration among tech companies, government agencies, and the public are crucial to effectively combat this threat.

Widespread public and private commitment is required to ensure that our elections aren’t further compromised by a technological gerrymandering of reality through deepfakes and disinformation. This effort is hindered by the fact that the very tools for informing the public are the ones this phenomenon undermines.

Crowdsourced knowledge has provided some measure of protection against disinformation on platforms like X. Increasingly, we see “Community Notes” added to content that is widely shared for its entertaining presentation or bold claims. These community-sourced contributions provide helpful context and, in some instances, fully debunk the misleading or manipulative information in these posts.

As we navigate the evolving landscape of digital information, vigilance and proactive measures are essential to safeguard the integrity of our elections and maintain trust in democratic institutions. 

As Seen on The Tech Factor.
