United States lawmakers are calling for swift legislative action to criminalize the creation and dissemination of deepfake images, particularly in light of the recent circulation of explicit fake photos of Taylor Swift on various social media platforms, including X and Telegram.
Preventing Deepfakes of Intimate Images Act
In response to the incident, U.S. Representative Joe Morelle expressed strong disapproval and called for passage of the Preventing Deepfakes of Intimate Images Act. The legislation would make the non-consensual sharing of digitally altered intimate images a federal crime, and Morelle urged swift action on the issue.
The Menace of Deepfakes
Deepfakes use artificial intelligence (AI) to produce manipulated images or videos, typically by altering or replacing an individual's face or body. There are currently no federal laws specifically addressing the creation or sharing of deepfake images, though some lawmakers are taking steps to close this gap in legislation.
Representative Yvette Clarke's Perspective
Representative Yvette Clarke noted that the situation involving Taylor Swift is not an isolated incident. She pointed out that women have been targeted by deepfake technology for years, and that advancements in AI have made deepfakes cheaper and easier to create.
Platform Actions and Monitoring
In response to the incident, X (formerly Twitter) removed the images and took action against the accounts responsible for spreading them. The platform says it is continuing to monitor the situation closely so that any further violations are promptly removed.
Global Concerns about Deepfakes
The United Kingdom has already made the sharing of deepfake pornography illegal under its Online Safety Act. According to the 2023 State of Deepfakes report, a significant share of deepfakes posted online is pornographic, and approximately 99% of the individuals targeted are women.
Growing Worries about AI-Generated Content
Concerns about AI-generated content have been rising. In the 19th edition of its Global Risks Report, the World Economic Forum highlighted the potential adverse consequences of AI technologies, warning of the intended and unintended negative impacts of advances in AI and related capabilities, including generative AI and deepfakes.
International Awareness and Action
Canada's primary national intelligence agency, the Canadian Security Intelligence Service, has also expressed concern about online disinformation campaigns that use AI-generated deepfakes.
The United Nations has likewise identified AI-generated media as a serious and urgent threat to information integrity, particularly on social media, emphasizing that rapid advances in generative artificial intelligence and deepfakes have heightened the risk of online disinformation.