
The challenges of the General Data Protection Regulation to protect data subjects against the adverse effects of artificial intelligence 

MARENGO, FEDERICO
2023

Abstract

Artificial intelligence (AI) has brought many societal changes and is transforming the way business is done. AI systems are pushing the boundaries of machine capabilities, cutting down the time required to complete specific tasks, enabling complex operations that exceed human capacity, and easing repetitive decision-making processes. Yet AI solutions can also have detrimental consequences for individuals and society. In particular, the processing of information using AI systems entails risks to the rights and freedoms of individuals, such as the lack of respect for human autonomy, the production of material or moral harm, discrimination, and a lack of transparency in decision-making. While these capabilities reveal the potential of AI, they also raise crucial questions, in particular regarding the adequacy and sufficiency of the EU data protection framework to protect the rights and freedoms of individuals.

When it comes to the interaction of AI and data protection, researchers have dedicated much of their effort to particular fields: the explainability of AI systems, the GDPR provisions on the right not to be subject to automated decisions, and the fairness of decisions taken using AI systems. However, while many works address both the challenges and the solutions concerning the processing of personal data using AI systems, this thesis differs from others in two crucial respects. First, most of the materials reviewed treat AI systems in general, without clearly explaining the concept of AI, the differences between AI systems or models, and how these differences affect the fundamental rights of individuals. Secondly, most reviewed materials provide highly detailed explanations of the legal concepts involved, present new interpretations, and offer recommendations on what should be done at the legislative level to improve the general situation of individuals. There is, however, generally a disconnect between these legal concepts and the operative aspects of data protection. This thesis attempts to bridge the gap between legislative and judicial interpretation, on the one hand, and the practical and operative aspects of the protection of personal data, on the other.

Against this backdrop, the general objective of this thesis is to evaluate the extent to which the processing of personal data using AI systems satisfies the requirements outlined in the GDPR. Its central enquiry is how individuals can be better protected from the risks posed by artificial intelligence systems.

The thesis is structured in five chapters. Chapter I elaborates on the concept and importance of data, in particular personal data, for the development of AI systems, and on the conceptualisation of artificial intelligence, considering its three most important techniques. Chapter II explains how personal data is protected in Europe. Chapter III provides an in-depth appraisal of the protection of individual rights under the GDPR when personal data is processed using AI systems; while a large part of the chapter is devoted to the rights related to automated decision-making, the remaining rights listed in the GDPR are also evaluated. Chapter IV details the limitations of the current regime and explains how the weaknesses previously identified could be overcome, focusing on two of the most important risk areas for the rights and freedoms of individuals: algorithmic transparency, and fairness and discrimination. Finally, Chapter V explores other governance mechanisms to further reduce the risks presented by the use of AI systems to process personal data.
Date of defence: 29 June 2023
Language: English
PhD cycle: 35
Academic year: 2021/2022
PhD programme: LEGAL STUDIES
Disciplinary sector: IUS/08 - Diritto Costituzionale (Constitutional Law)
Supervisor: POLLICINO, ORESTE
Files in this item:

File: Thesis_Marengo_Federico.pdf (Adobe PDF, 2.26 MB)
Description: Thesis_Marengo_Federico
Type: Doctoral thesis
Access: open access
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11565/4058474