In response to feedback from the Oversight Board and an extensive policy review process, Meta has announced changes to its approach to handling manipulated media on its platforms. Monika Bickert, Vice President of Content Policy at Meta, outlined the company's new strategy, which focuses on providing transparency and context through labels rather than removing content.
Meta acknowledges that its existing approach to manipulated media, which primarily focused on AI-generated videos that depict people saying things they didn't say, has become too narrow given the rapid advancements in AI technology. The company will now begin labelling a wider range of video, audio, and image content as "Made with AI" when it detects industry-standard AI image indicators or when users disclose that they're uploading AI-generated content. This expansion aligns with the Oversight Board's recommendation to address manipulation that shows a person doing something they didn't do, in addition to saying something they didn't say.
The Oversight Board argued that removing manipulated media that doesn't violate other Community Standards could unnecessarily restrict freedom of expression. In response, Meta agreed that providing transparency and additional context is now the better approach. The company will keep such content on its platforms, adding informational labels and context to help users assess it and understand its origins. In cases where digitally created or altered content poses a particularly high risk of materially deceiving the public on a matter of importance, Meta may add a more prominent label to provide further information and context.
Meta's decision to update its approach to manipulated media is the result of a policy review process that included consultations with over 120 stakeholders in 34 countries, as well as public opinion research involving more than 23,000 respondents in 13 countries. The majority of stakeholders agreed that removal should be limited to only the highest-risk scenarios where content can be tied to harm, as generative AI becomes a mainstream tool for creative expression. Public opinion research also revealed strong support for warning labels on AI-generated content that depicts people saying things they did not say.
Meta plans to start labelling organic AI-generated content in May 2024 and will stop removing content solely on the basis of its manipulated video policy in July. This timeline gives users time to familiarise themselves with the self-disclosure process before the company stops removing the smaller subset of manipulated media it previously took down.