At the Munich Security Conference (MSC) on February 16, 2024, leading technology companies pledged to work together to prevent deceptive AI content from interfering with this year's global elections. With more than four billion people in over 40 countries set to vote, the "Tech Accord to Combat Deceptive Use of AI in 2024 Elections" represents a significant step towards safeguarding online communities and protecting the integrity of democratic processes.
The accord outlines eight specific commitments that participating companies, including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok, and X, have agreed to:
1. Developing and implementing technology to mitigate risks related to deceptive AI election content
2. Assessing models to understand the risks they may present
3. Detecting the distribution of deceptive AI content on their platforms
4. Appropriately addressing detected deceptive AI content
5. Fostering cross-industry resilience to deceptive AI election content
6. Providing transparency to the public regarding their efforts
7. Engaging with diverse global civil society organisations and academics
8. Supporting efforts to foster public awareness, media literacy, and all-of-society resilience
These commitments apply wherever they are relevant to the services each company provides. The accord targets AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders, or that provide false information to voters about the voting process.
Ambassador Christoph Heusgen, Munich Security Conference Chairman, emphasised the importance of the Tech Accord, stating: "Elections are the beating heart of democracies. The Tech Accord to Combat Deceptive Use of AI in 2024 elections is a crucial step in advancing election integrity, increasing societal resilience, and creating trustworthy tech practices."