OpenAI has taken steps to address the misuse of their AI models in covert influence operations (IO) that aim to manipulate public opinion and influence political outcomes.

OpenAI announced that, over the last three months, it has disrupted five covert influence operations that attempted to leverage its AI models for deceptive purposes across the internet. These operations, originating from Russia, China, Iran, and Israel, sought to use OpenAI's models to generate comments, articles, and social media posts in multiple languages.

OpenAI reports that, as of May 2024, these campaigns have not significantly increased their audience engagement or reach as a result of using OpenAI's services.

The investigations conducted by OpenAI have revealed several trends in how covert influence operations have recently used AI models:

1. Content generation: Threat actors used AI to generate text and images in greater volumes and with fewer language errors than human operators alone.

2. Mixing old and new: AI-generated material was used alongside more traditional formats, such as manually written texts or memes.

3. Faking engagement: Some networks used AI to create the appearance of engagement across social media by generating replies to their own posts.

4. Productivity gains: Many threat actors used AI to enhance productivity, such as summarising social media posts or debugging code.

While OpenAI's efforts to disrupt covert influence operations have been successful, the organisation acknowledges that detecting and mitigating multi-platform abuse can be challenging, as it does not always know how content generated by its products is distributed. Nevertheless, OpenAI stresses that it remains dedicated to finding and mitigating this abuse at scale.
