Why the generative AI models do not like the right to be forgotten: a study of proportionality of identified limitations

Keywords

the right to be forgotten
the data producer’s right
the essence of fundamental rights
proportionality
AI Act
Data Act
machine unlearning

How to Cite

Popowicz-Pazdej, A. (2023). Why the generative AI models do not like the right to be forgotten: a study of proportionality of identified limitations. Przegląd Prawniczy Uniwersytetu im. Adama Mickiewicza, 15, 217–239. https://doi.org/10.14746/ppuam.2023.15.10

Abstract

The article explores a limitation of privacy and data protection rights that arises when generative AI models are used. The identified limitation is assessed from the perspective of the ‘essence’ of the right to the protection of personal data. To assess the limitation further, the author examines whether the right to be forgotten (RTBF) is relevant or effective in an AI/machine learning context. These considerations focus on the technical problems encountered when a strict interpretation of the RTBF is applied, and in particular on the antagonism between the values of privacy and data protection rights on the one hand and the technical capabilities of the producer of generative AI models on the other. Since the conclusion emphasizes that the RTBF cannot be practicably or effectively exercised in machine learning models, further consequences of this limitation are examined. The proportionality principle, as an instrument that guides the proper application of conflicting rights whenever one of them is limited, is used to frame the qualitative analysis; integrating this principle supports the conclusion by identifying a more efficient way to address some of the regulatory issues. The article therefore closes with suggested approaches to interpreting the RTBF in the light of this technological advancement. Ultimately, the paper addresses the legal conundrum of how to balance the interest in the innovative use of data (the data producer’s right) against privacy and data protection rights.
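
To illustrate why a strict reading of the RTBF is technically burdensome for model producers, the sketch below shows the baseline remedy discussed in the machine unlearning literature cited in the references: ‘exact unlearning’, i.e. retraining the model from scratch on the dataset with the contested record removed. It is a minimal, illustrative example only (a toy scikit-learn classifier with hypothetical data and variable names), not the method of the article or of any cited work.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training set: 1,000 records with five features each (hypothetical data).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Original model, trained on all records.
model = LogisticRegression().fit(X, y)

# A data subject asks for record 42 to be erased. Deleting the row from
# storage does not remove its influence on the trained parameters.
forget_index = 42
X_remaining = np.delete(X, forget_index, axis=0)
y_remaining = np.delete(y, forget_index, axis=0)

# "Exact unlearning": retrain from scratch on the remaining data.
# Trivial here; for a large generative model this retraining step is the
# prohibitive cost that the article identifies as the technical obstacle.
unlearned_model = LogisticRegression().fit(X_remaining, y_remaining)

Approximate unlearning techniques surveyed in the cited literature (e.g. Qu et al.; Yan et al.) aim to avoid this full retraining, and it is precisely this gap between legal expectation and technical feasibility that the article analyses through the lens of proportionality.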


References

Alexy, Robert. A Theory of Constitutional Rights. Oxford, New York, 2002.

Alexy, Robert. “Constitutional Rights, Balancing, and Rationality.” Ratio Juris 16, no. 2. 2003: 131–140. DOI: https://doi.org/10.1111/1467-9337.00228

Ambrose, Meg Leta. “It’s About Time: Privacy, Information Life Cycles, and the Right to Be Forgotten.” Stanford Technology Law Review 16, no. 2. 2013: 369–422.

Ambrose, Meg Leta, and Jef Ausloos. “The Right to Be Forgotten Across the Pond.” Journal of Information Policy 3. 2013: 1–23. DOI: https://doi.org/10.5325/jinfopoli.3.2013.0001

Bieber, Christoph. “Datenschutz als politisches Thema – von der Volkszählung zur Piratenpartei” [Data protection as a political issue: from the census to the Pirate Party]. In Datenschutz. Grundlagen, Entwicklungen und Kontroversen [Data privacy: Fundamentals, developments, controversies], edited by Jan-Hinrik Schmidt, and Thilo Weichert. Bonn, 2012: 34–44.

Burrell, Jenna. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms.” Big Data & Society 3, no. 1. 2016: 1–12. DOI: https://doi.org/10.1177/2053951715622512

Forde, Aidan. “Implications of the right to be forgotten.” Tulane Journal of Technology & Intellectual Property 18. 2015: 83–131.

Fosch-Villaronga, Eduard, Peter Kieseberg, and Tiffany Li. “Humans Forget, Machines Remember: Artificial Intelligence and the Right to Be Forgotten.” Computer Law & Security Review 34, no. 2. 2018: 304–313. DOI: https://doi.org/10.1016/j.clsr.2017.08.007

Gangjee, Dev Saif. “The Data Producer’s Right: An Instructive Obituary.” In The Cambridge Handbook of Private Law and Artificial Intelligence, edited by Ernest Lim, and Phillip Morgan. Cambridge, 2022.

Goodman, Bryce, and Seth Flaxman. “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation’.” Presented at ICML Workshop on Human Interpretability in Machine Learning (WHI 2016). New York, NY, June 2016.

Gryz, Jarek, and Marcin Rojszczak. “Black box algorithms and the rights of individuals: no easy solution to the ‘explainability’ problem.” Internet Policy Review 10, no. 2. 2021: 1–24. DOI: https://doi.org/10.14763/2021.2.1564

Lenaerts, Koen. “Exploring the limits of the EU Charter of Fundamental Rights.” European Constitutional Law Review 8, no. 3. 2012: 375–403. DOI: https://doi.org/10.1017/S1574019612000260

Lobo, Jesús López, Sergio Gil-Lopez, and Javier Del Ser. “The Right to Be Forgotten in Artificial Intelligence: Issues, Approaches, Limitations and Challenges.” In 2023 IEEE Conference on Artificial Intelligence (IEEE CAI). Santa Clara, California, USA, 5–6 June 2023: 179–180. DOI: https://doi.org/10.1109/CAI54212.2023.00085

Novelli, Claudio, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo, and Luciano Floridi. “How to Evaluate the Risks of Artificial Intelligence: A Proportionality-Based, Risk Model for the AI Act.” May 31, 2023. Available at SSRN: https://ssrn.com/abstract=4464783 DOI: https://doi.org/10.2139/ssrn.4464783

Politou, Eugenia, Efthimios Alepis, and Constantinos Patsakis. “Forgetting personal data and revoking consent under the GDPR: Challenges and proposed solutions.” Journal of Cybersecurity 4, no. 1. 2018: tyy001. DOI: https://doi.org/10.1093/cybsec/tyy001

Popowicz-Pazdej, Anna. “The proportionality between trade secret and privacy protection – how to strike the right balance when designing generative AI tools.” Journal of Privacy & Data Protection 6, no. 2. 2023: 153–167.

Popowicz-Pazdej, Anna. “The proportionality principle in privacy and data protection law.” Journal of Data Protection & Privacy 4, no. 3. 2021: 322–331.

Qu, Youyang, Xin Yuan, Ming Ding, Wei Ni, Thierry Rakotoarivelo, and David Smith. “Learn to Unlearn: A Survey on Machine Unlearning.” IEEE Computer Magazine 2023. DOI: https://doi.org/10.1109/MC.2023.3333319

Singh, Ajay Pal, and Rahil Setia. “Right to Be Forgotten Recognition, Legislation and Acceptance in International and Domestic Domain.” Nirma University Law Journal 6, no. 2. 2018: 37–56.

Stepanov, Ivan. “Introducing a property right over data in the EU: the data producer’s right – an evaluation.” International Review of Law, Computers & Technology 34, no. 1, 2020: 65–86. DOI: https://doi.org/10.1080/13600869.2019.1631621

Tsakyrakis, Stavros. “Proportionality: an assault on human rights?” Jean Monnet Working Paper no. 09/08. <https://jeanmonnetprogram.org/paper/proportionality-an-assault-on-human-rights-2/>.

Yan, Haonan, Xiaoguang Li, Ziyao Guo, Hui Li, Fenghua Li, and Xiaodong Lin. “Arcane: An efficient architecture for exact machine unlearning.” In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence. Vienna, 23–29 July 2022: 4006–4013. DOI: https://doi.org/10.24963/ijcai.2022/556

Zhang, Dawen, Pamela Finckenberg-Broman, Thong Hoang, Shidong Pan, Zhenchang Xing, Mark Staples, and Xiwei Xu. “Right to be Forgotten in the Era of Large Language Models: Implications, Challenges, and Solutions.” 2023, <https://arxiv.org/pdf/2307.03941.pdf>.