As the lines blur between human-created and synthetic content, Meta is taking steps to help users identify AI-generated images on its platforms. Nick Clegg, President of Global Affairs at Meta, outlined the company's approach to labelling AI-generated images and its ongoing efforts to develop industry standards for identifying AI content across various media types.

In the coming months, Meta will begin labelling images that users post to Facebook, Instagram, and Threads when the company can detect industry-standard indicators that the content is AI-generated. This initiative builds upon Meta's existing practice of labelling photorealistic images created using its own Meta AI image generator with the "Imagined with AI" tag.

Meta has been collaborating with industry partners through forums like the Partnership on AI (PAI) to develop common technical standards. These standards include invisible watermarks and metadata embedded within image files, which align with PAI's best practices. Meta is currently building tools that can identify these invisible markers at scale, enabling the company to label images generated by various AI tools from companies such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.
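Meta has not published the internals of these detection tools, but the sketch below gives a rough idea of what checking for one such industry-standard marker could look like. It assumes the generating tool wrote the IPTC "trainedAlgorithmicMedia" digital source type into the image's embedded XMP metadata; the file name and the byte-search approach are purely illustrative.

```python
# Minimal sketch: look for the IPTC "trainedAlgorithmicMedia" digital source
# type inside an image's embedded metadata. Illustrative only; production
# tooling would also parse C2PA manifests and check invisible watermarks.
from pathlib import Path

# IPTC DigitalSourceType URI used to declare that an image was created
# by a trained algorithmic model (i.e. a generative AI system).
AI_SOURCE_MARKER = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def has_ai_metadata_marker(image_path: str) -> bool:
    """Return True if the file's embedded metadata declares an AI source."""
    data = Path(image_path).read_bytes()
    # XMP metadata is stored as plain XML inside the file, so a simple byte
    # search is enough for this sketch; real code would parse the XMP packet.
    return AI_SOURCE_MARKER in data

if __name__ == "__main__":
    print(has_ai_metadata_marker("example.jpg"))  # hypothetical file name
```

Because this kind of marker is just metadata, it survives normal sharing but can be stripped by re-encoding or editing, which is one reason Meta is also investing in the detection approaches described below.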

While companies are starting to include these signals in their image generators, comparable support has not yet reached AI tools that generate audio and video content. To address this gap, Meta is adding a feature that allows users to disclose when they share AI-generated video or audio, enabling the platform to add a label to the content. However, Meta acknowledges that it is not yet possible to identify all AI-generated content, and there are ways for people to remove invisible markers. To combat these challenges, the company is developing classifiers that can automatically detect AI-generated content, even in the absence of invisible markers, and exploring ways to make it more difficult to alter or remove watermarks.
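Meta has not described how these classifiers are built. One plausible shape for such a detector is an ordinary image classifier fine-tuned to separate AI-generated from camera-originated images, as sketched below; the pretrained backbone and two-class head are assumptions for illustration, not Meta's published architecture.

```python
# Minimal sketch of a marker-free detector: a pretrained backbone with a
# binary head ("AI-generated" vs "authentic"). Illustrative only; the head
# here is untrained, so the output is meaningless until fine-tuned on
# labelled real/synthetic images.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Reuse an ImageNet-pretrained ResNet and replace its head with two classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict_ai_probability(image_path: str) -> float:
    """Return the model's probability that the image is AI-generated."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        logits = model(batch)
    return torch.softmax(logits, dim=1)[0, 1].item()
```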

Meta emphasises that its Community Standards apply to all content posted on its platforms, regardless of how it is created. The company has been using AI systems to enforce policies for several years. Meta is optimistic that generative AI could further enhance its ability to take down harmful content more quickly and accurately, particularly during moments of heightened risk, such as elections. The company is testing Large Language Models (LLMs) trained on its Community Standards to help determine policy violations and remove content from review queues when there is high confidence that it does not violate the rules.
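Meta has not said how this LLM-assisted triage is wired together, but the logic it describes can be sketched in a few lines: score a post against the policies, and only remove it from the human review queue when the model is highly confident it is benign. The `policy_llm` function and the confidence threshold below are hypothetical placeholders, not a real Meta or vendor API.

```python
# Minimal sketch of the review-queue triage described above. Only
# high-confidence non-violations are removed from human review; everything
# else stays in (or is added to) the queue.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.95  # assumed value, for illustration only

@dataclass
class PolicyVerdict:
    violates: bool
    confidence: float

def policy_llm(post_text: str) -> PolicyVerdict:
    """Hypothetical stand-in for an LLM trained on the Community Standards."""
    # Stub: a real implementation would prompt the model with the post and
    # the relevant policy sections, then parse its structured output.
    return PolicyVerdict(violates=False, confidence=0.0)

def triage(post_text: str, review_queue: list[str]) -> None:
    verdict = policy_llm(post_text)
    if not verdict.violates and verdict.confidence >= CONFIDENCE_THRESHOLD:
        # High confidence the post is benign: free up human reviewers.
        if post_text in review_queue:
            review_queue.remove(post_text)
    elif post_text not in review_queue:
        # Anything uncertain or violating goes to (or stays with) humans.
        review_queue.append(post_text)
```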
