Meta has joined forces with Thorn, All Tech is Human, and other leading tech companies to proactively address the potential misuse of generative AI tools in perpetrating child exploitation. By committing to a set of Safety by Design principles developed by Thorn and All Tech is Human, Meta aims to mitigate the risks associated with generative AI while fostering innovation and ensuring the safety of its users, particularly children.
The Safety by Design principles, to which Meta and its industry partners have committed, are divided into three main categories:
1. Develop:
- Responsibly source training datasets and safeguard them from child sexual abuse material (CSAM) and child sexual exploitation material (CSEM).
- Incorporate feedback loops and iterative stress-testing strategies in the development process.
- Employ content provenance with adversarial misuse in mind.
2. Deploy:
- Safeguard generative AI products and services from abusive content and conduct.
- Responsibly host models by assessing them for their potential to generate AI-generated child sexual abuse material (AIG-CSAM) and CSEM, and implementing mitigations before hosting.
- Encourage developer ownership in safety by design by providing information about models, including steps taken to avoid downstream misuse.
3. Maintain:
- Prevent services from scaling access to harmful tools by removing models and services that produce AIG-CSAM from platforms and search results.
- Invest in research and future technology solutions to address the use of generative AI for online child sexual abuse and exploitation.
- Fight CSAM, AIG-CSAM, and CSEM on platforms by detecting and removing content that violates child safety policies and by combating fraudulent uses of generative AI to sexually harm children.