
Claudio Zudettich
Risk management tools for AI risk assessment.
Advisors: Alessandro Mantelero, Maria Samantha Esposito. Politecnico di Torino, Corso di laurea magistrale in Ingegneria Gestionale (Engineering And Management), 2025
Abstract:

This thesis explores the origins and rationale behind the requirement for a Fundamental Rights Impact Assessment (FRIA) within the AI Act, examining how the EU legislator has structured the assessment of fundamental rights impacts and identifying the methodological principles necessary for its effective implementation. Through a detailed legal analysis of the AI Act and a review of various assessment frameworks, the thesis provides a comprehensive perspective on these critical issues.

The primary aim of this thesis is to address gaps in the theoretical and methodological development of the FRIA as envisioned in the AI Act. By designing a software tool that translates the FRIA methodology into a practical application, the project seeks to support EU and national authorities, as well as AI developers, in embedding this essential mechanism into the broader framework of human-centric and trustworthy AI. The proposed model not only aligns with the scope of the AI Act but also extends its applicability beyond Article 27, offering a versatile template for other regulatory contexts to ensure AI systems respect human rights.

In the context of the rapidly evolving regulatory landscape shaped by the AI Act and related frameworks such as the General Data Protection Regulation (GDPR), this thesis takes a closer look at the foundations of the FRIA. It examines how the FRIA is structured, the methodology behind it, and how it can be applied in different scenarios. By combining legal analysis with practical insights, the thesis contributes to the broader conversation on AI governance, offering clear and actionable strategies to manage risks while protecting fundamental rights.

The development of this program was an enriching and challenging experience that required careful planning, creative problem-solving, and a commitment to iterative improvement. The primary goal was to design a user-friendly tool capable of assessing risks associated with AI systems, generating impact tables, and facilitating effective risk management. From the very beginning, the focus was on accessibility, ensuring that users, regardless of their technical background, could navigate the tool confidently and extract meaningful insights from it.

In conclusion, this thesis emphasizes the necessity of aligning AI innovation with ethical and legal standards. It proposes a flexible and actionable model for conducting the FRIA, aimed at fostering the responsible development of AI systems that uphold principles of fairness, accountability, and human dignity in a rapidly evolving technological landscape. To contextualize and strengthen this approach, the thesis includes a comparative analysis between the FRIA and the Algorithmic Impact Assessment (AIA) model developed by the Government of Canada. While the AIA offers a quantitative, policy-oriented framework for evaluating the technical and administrative risks of automated decision systems, the FRIA provides a more normative, rights-based perspective focused on the potential infringement of fundamental rights. This comparison highlights the complementary nature of the two approaches and underscores the value of integrating both technical and legal-ethical considerations in AI governance.
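The abstract describes a tool that assesses risks and generates impact tables, but the full text of the thesis is restricted, so its actual design is not documented here. The sketch below is only a minimal illustration of what a FRIA-style scoring step could look like: the choice of Python, the 1-4 ordinal scales, the likelihood-times-severity matrix, and all names (RightAssessment, risk_level, impact_table) are assumptions introduced for illustration, not the thesis's implementation.

```python
# Hypothetical sketch, not the thesis's actual tool: each affected fundamental
# right is rated for likelihood and severity on an assumed 1-4 ordinal scale,
# the two ratings are combined into a risk level, and the results are rendered
# as a small plain-text impact table.

from dataclasses import dataclass

LEVELS = {1: "low", 2: "medium", 3: "high", 4: "very high"}

@dataclass
class RightAssessment:
    right: str        # e.g. "non-discrimination", "data protection"
    likelihood: int   # 1-4: probability that the impact materialises
    severity: int     # 1-4: gravity of the impact if it materialises

def risk_level(likelihood: int, severity: int) -> str:
    """Combine likelihood and severity into an ordinal risk level (assumed matrix)."""
    score = likelihood * severity  # ranges from 1 to 16
    if score >= 12:
        return "very high"
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

def impact_table(assessments: list[RightAssessment]) -> str:
    """Render one row per fundamental right with its ratings and overall risk."""
    rows = [f"{'Right':<20} {'Likelihood':<12} {'Severity':<10} {'Risk':<10}"]
    for a in assessments:
        rows.append(
            f"{a.right:<20} {LEVELS[a.likelihood]:<12} "
            f"{LEVELS[a.severity]:<10} {risk_level(a.likelihood, a.severity):<10}"
        )
    return "\n".join(rows)

if __name__ == "__main__":
    sample = [
        RightAssessment("non-discrimination", likelihood=3, severity=4),
        RightAssessment("data protection", likelihood=2, severity=3),
    ]
    print(impact_table(sample))
```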
| Field | Value |
|---|---|
| Advisors | Alessandro Mantelero, Maria Samantha Esposito |
| Academic year | 2024/25 |
| Publication type | Electronic |
| Number of pages | 72 |
| Additional information | Restricted thesis. Full text not available |
| Subjects | |
| Degree course | Corso di laurea magistrale in Ingegneria Gestionale (Engineering And Management) |
| Degree class | Nuovo ordinamento > Laurea magistrale > LM-31 - INGEGNERIA GESTIONALE |
| Collaborating companies | NOT SPECIFIED |
| URI | http://webthesis.biblio.polito.it/id/eprint/36028 |