Over half the global population went to the polls in 2024. Headlines warned of deepfakes, synthetic media campaigns, and AI-fuelled misinformation that could sway voters.
Public concern about AI's influence on elections reached remarkable levels worldwide: surveys found that 67% of Europeans were worried about AI's potential to manipulate election outcomes, while 78% of Americans believed AI posed a threat to the 2024 presidential election.
There were documented cases of AI-enabled election interference: deepfake audio of UK political figures such as London Mayor Sadiq Khan and Labour leader Keir Starmer; a fake robocall using an AI-generated clone of President Biden's voice to discourage New Hampshire voters from participating; an AI-generated video showing Vice President Harris making crude remarks about the Trump assassination attempt; and, in India, AI-generated videos of deceased political figures making statements. Yet none of these produced the Orson Welles *War of the Worlds* moment of mass panic that many had foreseen.
Major tech companies implemented diverse strategies to mitigate AI risks during elections:
- Google blocked election-related conversations entirely in its chatbot.
- Anthropic adopted a "Swiss cheese model" with layered interventions, explicitly prohibiting misuse in election contexts and redirecting users to authoritative sources like TurboVote.
- Meta created an Elections Operations Centre for South Africa, collaborating extensively with fact-checking organisations.
- Perplexity launched an Election Information Hub partnering with The Associated Press for real-time results and voter education.
A combination of proactive tech company policies, research by organisations such as the Alan Turing Institute, high public vigilance, and the technical limitations of generative AI tools collectively minimised disruptions. But how long will this 'perfect recipe' last?
Research by the Alan Turing Institute identified the "liar's dividend" as a significant threat: once the public knows deepfakes exist, genuine information can itself be dismissed as fabricated, making legitimate evidence suspect.
Effective approaches included:
- Anthropic's explicit policy updates and layered safeguards
- Perplexity's authoritative Election Information Hub
- Regulatory guidelines suggested by the Alan Turing Institute
- Increased media literacy and educational initiatives
As Sunny Gandhi of Encode Justice warned, the threats will continue to evolve, underscoring the need for ongoing vigilance, balanced against leveraging AI's potential to enhance democratic participation.
And although we live in a world of data, a future that promises self-driving cars, universal basic income, and service robots, we also live in a world of sentiment. What people read, hear, and believe, regardless of how rational or true any of it is, moves stock prices globally and shapes voting behaviour. That two-thirds to three-quarters of the public fear AI tampering with elections is arguably just as damaging as if AI had actually swayed any of them. Perception shapes reality, and the erosion of trust in our institutions can be as harmful as the threats themselves.