"Today, deepfakes often involve sexual imagery, fraud, or political disinformation. Since AI is progressing rapidly and making deepfakes much easier to create, safeguards are needed," the group said in the letter, which was organized by Andrew Critch, an AI researcher at UC Berkeley.
Deepfakes, which are realistic yet fabricated images, audio, and video created by AI algorithms, have become increasingly difficult to distinguish from human-created content due to recent technological advancements.
The letter, titled "Disrupting the Deepfake Supply Chain," proposes recommendations for regulating deepfakes, including fully criminalizing deepfake child pornography, imposing criminal penalties on any individual who knowingly creates or facilitates the spread of harmful deepfakes, and requiring AI companies to take steps to prevent their products from generating harmful deepfakes.
As of Wednesday morning, more than 400 individuals from various sectors, including academia, entertainment, and politics, had signed the letter.
Signatories include Steven Pinker, a Harvard psychology professor; Joy Buolamwini, founder of the Algorithmic Justice League; two former Estonian presidents; researchers at Google DeepMind; and a researcher from OpenAI.
Ensuring that AI systems do not harm society has been a priority for regulators since Microsoft-backed OpenAI unveiled ChatGPT in late 2022, which impressed users with its human-like conversation and its performance on a range of other tasks.
Prominent individuals have issued several warnings about the risks of AI. Notably, Elon Musk signed a letter last year calling for a six-month pause in the development of systems more powerful than OpenAI's GPT-4 AI model.