In a recent open letter titled "Disrupting the Deepfake Supply Chain," a group of AI experts and industry executives, including renowned AI trailblazer Yoshua Bengio, has called for increased regulation surrounding the creation of deepfakes, citing the growing threat they pose to society.
Deepfakes have become increasingly indistinguishable from human-created content. The open letter highlights the potential risks associated with deepfakes, such as their use in sexual imagery, fraud, and political disinformation. As AI continues to progress rapidly, making deepfakes easier to create, the signatories emphasise the urgent need for safeguards to prevent their misuse.
The letter, put together by Andrew Critch, an AI researcher at UC Berkeley, makes several recommendations on how to regulate deepfakes. These include the full criminalisation of deepfake child pornography, criminal penalties for individuals knowingly creating or facilitating the spread of harmful deepfakes, and requiring AI companies to prevent their products from being used to create harmful deepfakes. The signatories include researchers from Google, DeepMind, and OpenAI, as well as prominent figures like Harvard psychology professor Steven Pinker and two former Estonian presidents.
However, a global consensus on these matters remains a distant prospect, unlikely to materialise in the near future. The potential consequences of unchecked AI and deepfake technology are nonetheless alarming: they can erode trust in all forms of media, leading to chaos and instability.
The fear is that deepfakes could be used to start wars, topple governments, manipulate stock markets, create artificial scarcity of essential goods, and fuel general unrest and anarchy. In a world where people cannot trust anything they read, hear, or see, the stage is set for a breakdown of social cohesion and rational discourse.
Despite the urgent need for regulation, holding AI developers liable for the misuse of their products remains a contentious issue. Just as car manufacturers are not held legally responsible for the misuse of their vehicles, AI developers cannot be expected to bear the burden of responsibility for the misdeeds of end-users. The fear of liability could also have a chilling effect on research and development, as companies and individuals become reluctant to explore new frontiers in AI.
As the debate surrounding the regulation of deepfakes continues, it is crucial for governments, industry leaders, and AI experts to collaborate and find a balance between fostering innovation and mitigating potential risks. The open letter serves as a wake-up call, highlighting the need for proactive measures to prevent the misuse of AI technology and ensure that its benefits are harnessed for the greater good of society.