Emerging trends in criminalising (non-consensual) sexual deepfakes: challenges and perspectives from England and Wales, the US and the EU
author | Clementina Salvi |
journal | RIDP (ISSN: 0223-5404) |
volume | 2024 |
issue | Researching the boundaries of sexual integrity, gender violence and image-based abuse |
section | Image-based and online sexual abuse |
publication date | 20 September 2024 |
language | English |
page | 391 |
abstract | Artificial Intelligence (AI), particularly deepfake technology, is transforming online pornography, giving rise to new forms of image-based abuse and significant regulatory challenges. The harms can be as severe as those caused by abuse involving real images and disproportionately affect women. As with earlier forms of non-consensual intimate image distribution, new criminal law responses are now being called for to deter the related harms. This paper focuses on emerging trends in criminalising (non-consensual) sexual deepfakes. It begins with an overview of AI technologies, their use in producing non-consensual sexually explicit content, and the phenomenon’s origins. After describing the key role technology companies play in creation and detection, it turns to criminal law regulation. Although existing laws target the non-consensual distribution of sexual images, most focus on the non-consensual “sharing” of “real” images and do not cover the “creation” of explicit deepfakes. New or amended offences in England and Wales, the United States, and at the European Union level now specifically address the “deepfake” dimension. These laws are analysed to illustrate the complexities of regulating rapidly evolving technology. Criminal law is an important tool against culpable sexual image-based abuse but struggles to keep pace with technological advancement and the exponential growth of online abuse. This paper highlights the need to avoid merely symbolic and inconsistent approaches and instead to adopt a deeper, more focused strategy. It concludes by suggesting that an effective approach might shift the focus from individual criminal liability alone to efficiently regulating AI developers during the creation phase and hosting platforms during the distribution phase. |