Show simple item record

dc.contributor.author: Romero-Moreno, Felipe
dc.date.accessioned: 2024-04-04T12:45:02Z
dc.date.available: 2024-04-04T12:45:02Z
dc.date.issued: 2024-03-29
dc.identifier.citation: Romero-Moreno, F 2024, 'Generative AI and deepfakes: a human rights approach to tackling harmful content', International Review of Law, Computers & Technology. https://doi.org/10.1080/13600869.2024.2324540
dc.identifier.issn: 1360-0869
dc.identifier.other: ORCID: /0000-0001-7545-7740/work/157084049
dc.identifier.uri: http://hdl.handle.net/2299/27710
dc.description: © 2024 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/).
dc.description.abstract: The EU's Artificial Intelligence Act (AIA) introduces necessary deepfake regulations. However, these could infringe on the rights of AI providers and deployers or users, potentially conflicting with privacy and free expression under Articles 8 and 10 of the European Convention on Human Rights, and the General Data Protection Regulation (EU) 2016/679 (GDPR). This paper critically examines how an unmodified AIA could enable voter manipulation, blackmail, and the generation of sexually abusive content, facilitating misinformation and potentially harming millions, both emotionally and financially. Through analysis of the AIA's provisions, the GDPR's regulations, relevant case law, and academic literature, the paper identifies risks for both AI providers and users. While the AIA's yearly review cycle is important, the immediacy of these threats demands swifter action. This paper proposes two key amendments: 1) mandate structured synthetic data for deepfake detection, and 2) classify AI intended for malicious deepfakes as ‘high-risk’. These amendments, alongside clear definitions and robust safeguards, would ensure effective deepfake regulation while protecting fundamental rights. The paper urges policymakers to adopt these amendments during the next review cycle to protect democracy, individual safety, and children. Only then will the AIA fully achieve its aims while safeguarding the freedoms it seeks to uphold.
dc.format.extent: 31
dc.format.extent: 2449318
dc.language.iso: eng
dc.relation.ispartof: International Review of Law, Computers & Technology
dc.subject: Deepfake regulation
dc.subject: Human rights
dc.subject: Generative AI
dc.subject: Political disinformation
dc.subject: GDPR
dc.subject: generative AI
dc.subject: human rights
dc.subject: Law
dc.subject: Computer Science Applications
dc.title: Generative AI and deepfakes: a human rights approach to tackling harmful content
dc.contributor.institution: Centre for Future Societies Research
dc.contributor.institution: Law
dc.contributor.institution: Hertfordshire Law School
dc.description.status: Peer reviewed
dc.identifier.url: http://www.scopus.com/inward/record.url?scp=85189609444&partnerID=8YFLogxK
rioxxterms.versionofrecord: 10.1080/13600869.2024.2324540
rioxxterms.type: Journal Article/Review
herts.preservation.rarelyaccessed: true

