Generative AI and deepfakes: a human rights approach to tackling harmful content
Author
Romero-Moreno, Felipe
Abstract
The EU's Artificial Intelligence Act (AIA) introduces necessary deepfake regulations. However, these could infringe on the rights of AI providers and deployers or users, potentially conflicting with the rights to privacy and freedom of expression under Articles 8 and 10 of the European Convention on Human Rights, as well as the General Data Protection Regulation (EU) 2016/679 (GDPR). This paper critically examines how an unmodified AIA could enable voter manipulation, blackmail, and the generation of sexually abusive content, facilitating misinformation and potentially harming millions, both emotionally and financially. Through analysis of the AIA's provisions, the GDPR's requirements, relevant case law, and academic literature, the paper identifies risks for both AI providers and users. While the AIA's yearly review cycle is important, the immediacy of these threats demands swifter action. This paper proposes two key amendments: 1) mandate structured synthetic data for deepfake detection, and 2) classify AI intended for malicious deepfakes as ‘high-risk’. These amendments, alongside clear definitions and robust safeguards, would ensure effective deepfake regulation while protecting fundamental rights. The paper urges policymakers to adopt these amendments during the next review cycle to protect democracy, individual safety, and children. Only then will the AIA fully achieve its aims while safeguarding the freedoms it seeks to uphold.