Generative AI

Three Ways AI Can Hack the U.S. Election

The growing capability of AI content poses three very real threats to modern elections. We explain each, and take a glimpse at a possible solution to the growing AIpocalypse.
October 27, 2024
10 min. read

Introduction

In 2020, we covered Three Ways to Hack the U.S. Election. That article is every bit as relevant today as it was then. Four years ago, we focused on the ways in which disinformation could be used to misinform and divide the nation; the security community was still reacting to the 2018 official finding that Russian and Iranian intelligence operatives were hacking political email systems to slant the news and sway voters. In 2024, the digital landscape has shifted again. The rapid proliferation of generative artificial intelligence (AI) and deepfakes poses an even more insidious threat. Today’s disinformation campaigns are no longer limited to shadowy state-sponsored actors—anyone with access to the right tools can now create convincing fake content, flooding the internet with false narratives. We explore the current and future threats to democratic voting systems posed by generative AI and why protocols like the Coalition for Content Provenance and Authenticity (C2PA) may not be enough to protect voters.

Three Ways AI Can Hack Voters

From fake videos and voice-cloned audio to the rapid and widespread dissemination of fake news stories, we put forward the three most likely ways in which AI can and will be used to influence elections all over the world.

Disinformation and Deepfakes

To date, discussions of election security have primarily focused on the technical and societal risks of electronic voting machines. In 2024, however, concerns about election interference should focus on a much more fundamental issue: trust.

In recent years, disinformation has become a geopolitical weapon, and today it is easier than ever to create convincing fake content. Generative AI and other forms of machine learning are expanding their capabilities at an alarming rate. Early AI videos were comically bad (we need only look at the April 2023 video of Will Smith to be reminded of that).1 Less than a year after that spaghetti-fuelled nightmare, OpenAI previewed ‘Sora’, its new tool for creating realistic, believable-looking videos.2 Initial results from Sora had some flaws but could pass as real, at least to the casual observer. Today, AI manipulators can take real footage and effortlessly change entire aspects of a video, for example by swapping backgrounds or by perfectly modifying facial expressions to lip-sync a person’s mouth to deepfake audio. Updated 2024 versions of Will Smith eating spaghetti are shockingly realistic. Well, some are. There is still plenty of nightmare fuel out there.

With the proliferation of AI tools capable of generating deepfake videos, realistic images, and text that mimics legitimate news articles, the boundaries between real and fake have blurred. The content that gen AI creates, however, is not the first of its kind. Manipulated and fake images have been around at least since the dawn of photography with Hippolyte Bayard’s “Self Portrait as a Drowned Man”,3 and the Cottingley Fairies fooled even Arthur Conan Doyle in 1917.4 Of course, image manipulation software such as Adobe Photoshop has made these efforts easier. However, these tools require expert artists, expensive licenses, and hours (if not days) of time to create believable simulacra. Generative AI tools remove almost all of these barriers, allowing anyone, at no cost, to create realistic content within minutes.

During the 2020 U.S. election period, nation-states engaged in cyber deception to divide, demoralize, distract, and discredit. They accomplished this using teams of operatives and automated bots, flooding social media with hand-crafted fake content. Now, anyone with a computer and internet access can create realistic fake content that can go viral within hours.

As of today, generative AI is easily able to:

  • Create written content such as news articles, scientific-looking reports, social media posts, etc.
  • Create photorealistic images that can pass as genuine photographs
  • Synthesize speech and sound effects, facilitating deepfake voices of real people
  • Create basic (though still impressive) video capable of presenting realistic people delivering authentic-sounding dialogue
  • Manipulate real video to replace or change elements

This is not a sensationalized prediction of things to come. Generative AI has already been weaponized. False claims of celebrity deaths, often created by AI-powered news generators, have propagated across platforms. NewsGuard’s AI Tracking Center claims that the number of AI-generated news sites has gone from zero in early 2023 to over 1,100 by October 2024.5

During this election season, we have already seen AI-generated images and videos of Donald Trump and Kamala Harris. While some have clearly been created for comedic effect, many are far more worrying. Microsoft’s Election Day 2024 report, published October 23rd, 2024, cites an example of a deepfake video in which Vice President Kamala Harris appears to make crass remarks about the assassination attempt against Donald Trump.6 Microsoft’s report confirms that Russian influence actors created the deepfake video, which received tens of thousands of views on X/Twitter after it was shared with an RT correspondent.

Figure 1. A redacted still taken from the deepfake video of Vice President Kamala Harris created by Russian actors. Source: Microsoft’s Election Day 2024 report

With its high contrast and odd lighting, the image is clearly identifiable as fake to many. Yet to others, the picture is realistic enough to be entirely believable. AI-generated content isn’t only a concern when it comes from foreign entities such as Russia or China, however. The Guardian features examples of AI-generated images being amplified by US political groups on social media platforms, including Facebook.7,8 In addition, the mere possibility of using AI to generate fake images was recently used to call into question images of a Kamala Harris rally, which forensic image experts agree are genuine.9 This perfectly illustrates the tactics attributed to Russian disinformation campaigns now being used within the US political process rather than by a foreign nation. It no longer matters whether AI images are good enough to be believed; we have reached a point where faked images are good enough that all images can be called into question.

Voter Suppression

Beyond creating fake content for disinformation, deepfakes can have other nefarious purposes.

In January 2024, a robocall used deepfake technology to mimic President Joe Biden’s voice; political consultant Steve Kramer later admitted to orchestrating it.10 The faked call, which reached thousands of New Hampshire voters, discouraged them from participating in the state’s presidential primary. Although Kramer claims that his actions were a deliberate wake-up call regarding the risks of generative AI, this does not seem to have been made clear in the robocall itself. Excerpts from the call, all delivered in an authentic-sounding (if slightly imperfect) imitation of President Joe Biden’s voice, include “Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again” and “your vote makes a difference in November, not this Tuesday”. The robocalls also employed caller ID spoofing to disguise their origin.

Biden was not campaigning in New Hampshire, and voting in the primaries does not preclude voters from casting a ballot in November’s general election.

Kramer estimates he spent about $500 to generate $5 million worth of media coverage. In September 2024, the FCC finalized a $6 million fine against him for orchestrating illegal robocalls.11

Dissemination and Widening the Divide

Bots and automation play a significant role in the spread of disinformation on social media platforms like X/Twitter, where they are used to amplify false narratives, manipulate public opinion, and create an illusion of widespread consensus on controversial topics. These automated accounts can rapidly share misleading content, interact with genuine users, and engage in coordinated efforts to boost the visibility of particular posts, making it difficult for users to differentiate between organic engagement and orchestrated campaigns. Bots are often programmed to retweet specific hashtags, engage with inflammatory content, and follow real users to increase their credibility, all while pushing disinformation to the top of trending topics.

A striking example of how bots manipulate conversations occurred during the height of the COVID-19 pandemic. Bots were responsible for a large portion of the misinformation regarding vaccines and public health measures, artificially inflating the visibility of conspiracy theories and false information about the virus.12

Dan Woods, a former CIA and FBI cybersecurity expert and former Head of Intelligence at F5, has discussed the severe impact of bots on X/Twitter in conversations with Elon Musk. Woods estimated that up to 80% of Twitter’s active accounts could be bots, underscoring the extent to which automation can distort online discourse.13 This issue was highlighted during Musk’s bid to acquire Twitter, when the billionaire raised concerns that bot accounts were potentially inflating the platform’s user metrics and affecting ad revenue. Woods’ insights highlight the pervasive influence of automated disinformation networks, making the detection and removal of these bots a critical challenge for social media platforms.
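
Detection typically begins with behavioral signals, since traditional bots leave statistical fingerprints: inhuman posting rates, copy-pasted content, and thin social graphs. The sketch below shows a toy bot-likeness heuristic in Python; every signal, threshold, and weight here is invented for illustration, and real platform defenses combine far more signals with trained models.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    posts_per_day: float             # average posting rate
    duplicate_ratio: float           # share of posts near-duplicating other accounts' posts (0-1)
    follower_following_ratio: float  # followers divided by accounts followed
    account_age_days: int

def bot_likeness_score(a: AccountActivity) -> float:
    """Combine simple behavioral signals into a 0-1 'bot-likeness' score.
    Thresholds and weights are illustrative, not empirically derived."""
    score = 0.0
    if a.posts_per_day > 50:           # humans rarely sustain this rate
        score += 0.35
    score += 0.35 * a.duplicate_ratio  # copy-paste amplification is a strong signal
    if a.follower_following_ratio < 0.1:
        score += 0.15                  # follows many accounts, followed by few
    if a.account_age_days < 30:
        score += 0.15                  # freshly created accounts are higher risk
    return min(score, 1.0)

suspect = AccountActivity(posts_per_day=120, duplicate_ratio=0.8,
                          follower_following_ratio=0.02, account_age_days=12)
print(f"bot-likeness: {bot_likeness_score(suspect):.2f}")  # prints 0.93
```

The trouble, as described below, is that AI-generated personas are designed to erase exactly these fingerprints.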

AI offers the ability to significantly enhance the capabilities of social media bots by enabling them to create fully realized, convincing personas. These bots can be equipped with intricate backstories, complete with fabricated personal details such as employment history, hobbies, and even educational backgrounds, making them indistinguishable from real users. AI-generated bots can interact with both real and other fake users in ways that create an illusion of authenticity through connections, mutual likes, comments, and follower networks. By simulating these interactions, AI-enhanced bots can gain credibility and trust within online communities, embedding themselves into social groups over time. OpenAI has recently been working to make its products more persuasive and emotionally engaging, although this work isn’t necessarily intended to be applied to social media.14,15

In addition, bots can use large language models (LLMs) to craft highly realistic posts on a wide range of topics, from personal updates to nuanced political opinions. These posts can be used to subtly promote disinformation, stir controversy, or reinforce certain narratives without appearing out of place. For example, a bot embedded in a political discussion group could post detailed opinions about policy changes, referencing relevant articles or statistics to appear knowledgeable. The bot could also engage with real users by commenting on their posts, offering opinions, or sharing content that matches their interests, all while blending seamlessly into the social fabric. This level of sophistication makes AI-enhanced bots powerful tools for influencing conversations, shaping public opinion, and spreading disinformation across social media platforms.

Future AI

TV news has long been considered the last bastion of trustworthiness because it unfolds live, with events broadcast as they happen. To many of us, this would seem next to impossible to fake. Unlike pre-recorded or edited media, live broadcasts carry an instinctive sense of authenticity. While deepfakes and AI-generated videos can be created rapidly, the content they deliver is still limited by the need for pre-scripted material and by the time it takes to generate the content, even if that is only a matter of minutes.

However, with advancements in AI, it's becoming conceivable that in the near future AI could generate video content, including news broadcasts, in real-time. This could lead to the unsettling possibility of entire "news" shows featuring AI-generated anchors who can deliver, interact with, and even react to real-world events in real-time, blurring the line between authentic and synthetic information on a level never before seen.

Emotionally intelligent AI is a rapidly developing field in which LLMs are able to understand and react to human feedback and emotions. When used by fake news or social media bots, this has the potential to be a highly manipulative tool for deepening social divides. These AI-driven bots can analyze and understand the emotional tone of real people's posts, detecting anger, fear, frustration, or bias in real-time. By recognizing these emotional cues, the AI can craft responses that are perfectly tailored to amplify those feelings, further inflaming tensions and reinforcing echo chambers. This ability to exploit emotional vulnerabilities allows disinformation campaigns to manipulate individuals on a personal level, fueling polarization and making divisive issues even more contentious.
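
The first step of that loop, detecting the emotional cue, requires surprisingly little machinery. The sketch below classifies a post’s dominant emotion using tiny keyword lexicons; the lexicons and labels are invented for illustration, and a real system would use an LLM or a trained emotion classifier, but the simplicity is the point.

```python
# Toy emotional-cue detection: score a post against small keyword lexicons.
# The lexicons below are invented for illustration; production systems
# would use an LLM or a trained emotion classifier.
EMOTION_LEXICONS = {
    "anger": {"outrage", "corrupt", "rigged", "disgrace", "betrayed"},
    "fear":  {"threat", "dangerous", "collapse", "crisis", "unsafe"},
}

def dominant_emotion(post: str) -> tuple[str, int]:
    """Return the emotion with the most lexicon hits, plus the hit count."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    counts = {emotion: len(words & lexicon)
              for emotion, lexicon in EMOTION_LEXICONS.items()}
    return max(counts.items(), key=lambda kv: kv[1])

post = "The system is rigged and corrupt. We have been betrayed!"
print(dominant_emotion(post))  # ('anger', 3)
```

A manipulative bot would then select a reply keyed to that dominant emotion, crafted to inflame rather than inform, and repeat the process for every user it engages.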

Combating Fake and AI-generated Content

The Coalition for Content Provenance and Authenticity (C2PA) protocol is a standards-based initiative designed to address the growing problem of disinformation and fabricated media, particularly in the era of AI-generated content like deepfakes. Jointly developed by organizations like Adobe, Microsoft, Intel, and the BBC, it provides a framework for attaching verifiable metadata to digital media files, allowing creators to disclose key information about the origin and editing history of an image, video, or document. By embedding metadata directly into the media file, C2PA aims to ensure that audiences can verify whether a piece of content is genuine or has been altered, offering an accessible, standardized way to track digital content’s authenticity across the internet.

Figure 2. Example video with embedded C2PA digital watermark. Source: c2pa.org

Figure 2. Example video with embedded C2PA digital watermark. Source: c2pa.org

The core idea behind C2PA is to establish a provenance chain for media, where each step in the creation, editing, and distribution process is recorded and attributed to a trusted source. The protocol uses cryptographic signatures to ensure that any tampering with the metadata would be detectable. For instance, a photo captured by a camera supporting C2PA could include details such as the time and location of the shot, the camera’s identity, and any subsequent edits made in software like Photoshop. The embedded metadata follows the content across platforms, allowing viewers to access this information by using tools or platforms that support the protocol, making it much easier to spot manipulated or AI-generated content.
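
The real C2PA manifest is considerably more involved, using JUMBF containers, hashed assertions, and X.509 certificate chains, but the core idea of a signed provenance chain can be sketched in a few lines. The following Python sketch is a conceptual illustration only, not the actual C2PA data structures, built on the pyca/cryptography package’s Ed25519 primitives:

```python
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Conceptual sketch of a signed provenance chain; NOT the real C2PA format.
signing_key = Ed25519PrivateKey.generate()  # stands in for a certified device/tool key
verify_key = signing_key.public_key()

def add_provenance(content: bytes, history: list, action: str) -> list:
    """Append a signed record binding an action to the current content hash."""
    record = {
        "action": action,  # e.g. "captured", "edited"
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "prev_signature": history[-1]["signature"] if history else None,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = signing_key.sign(payload).hex()
    return history + [record]

def verify_chain(content: bytes, history: list) -> bool:
    """Check every record's signature, then that the final hash matches the content."""
    for rec in history:
        payload = json.dumps({k: v for k, v in rec.items() if k != "signature"},
                             sort_keys=True).encode()
        try:
            verify_key.verify(bytes.fromhex(rec["signature"]), payload)
        except InvalidSignature:
            return False
    return history[-1]["content_sha256"] == hashlib.sha256(content).hexdigest()

photo = b"...raw image bytes..."
chain = add_provenance(photo, [], "captured")
edited = photo + b" (cropped)"
chain = add_provenance(edited, chain, "edited")

print(verify_chain(edited, chain))         # True: content matches its signed history
print(verify_chain(edited + b"x", chain))  # False: content was altered after signing
```

In the real protocol, signing keys belong to certified devices and editing tools, so a verifier can reject content whose recorded history is missing, broken, or signed by an untrusted party.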

This approach is particularly crucial in combating AI-generated fake content, such as deepfakes, which can convincingly imitate real people and events. C2PA’s cryptographic signatures can help authenticate genuine content by confirming that it still matches the hashes and history recorded in its signed manifest. This system helps users identify when an AI tool has been used to create or manipulate content, offering transparency and potentially curbing the rapid spread of disinformation. By tracing content back to its source, C2PA aims to provide both publishers and consumers with reliable tools to judge the trustworthiness of digital media.

The vision behind C2PA is promising. In theory, this framework directly addresses claims of forgery or AI generation, as content providers can offer proof of the provenance and veracity of an image or video file. In a subsequent article, we will explain C2PA in detail, along with the many ways it could fail to live up to that promise.

Conclusion

As the 2024 election cycle draws to its conclusion, the dangers of disinformation and generative AI are more pressing than ever. While technical solutions like C2PA provide a valuable layer of protection, they are far from perfect. We must be aware of the protocol’s limitations and recognize that, without widespread adoption, public education, and a healthy dose of skepticism, the battle against digital manipulation is far from over.

Many will argue for broader public awareness of the risks and better education, but in a rapidly shifting world where the tech skills gap is widening, it is becoming increasingly unrealistic to expect everyday users to become sufficiently literate in complex technical mechanisms.

Authors & Contributors
David Warburton (Author)
Director, F5 Labs
Footnotes

1. https://www.youtube.com/watch?v=XQr4Xklqzw8
2. https://openai.com/index/sora/
3. https://about.jstor.org/blog/fake-news-the-drowning-of-hippolyte-bayard/
4. https://en.wikipedia.org/wiki/Cottingley_Fairies
5. https://www.newsguardtech.com/special-reports/ai-tracking-center/
6. https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/msc/documents/presentations/CSR/MTAC-Election-Report-5.pdf
7. https://www.theguardian.com/us-news/2024/sep/15/ai-harris-trump-debate
8. https://www.facebook.com/photo.php?fbid=1083638339788652
9. https://www.npr.org/2024/08/14/nx-s1-5072687/trump-harris-walz-election-rally-ai-fakes
10. https://apnews.com/article/ai-robocall-biden-new-hampshire-primary-2024-f94aa2d7f835ccc3cc254a90cd481a99
11. https://www.reuters.com/world/us/fcc-finalizes-6-million-fine-over-ai-generated-biden-robocalls-2024-09-26/
12. https://www.infectioncontroltoday.com/view/covid-19-masking-hundreds-thousands-russian-social-media-bots-have-tricked-public
13. https://www.f5.com/company/blog/bot-traffic-percentage-fake-accounts-expert
14. https://www.wired.com/story/thrive-ai-openai-artificial-intelligence-persuasion/
15. https://www.wired.com/story/openai-gpt-4o-model-gives-chatgpt-a-snappy-flirty-upgrade/
