Flavio Lorenzo
Techniques for trustworthy artificial intelligence systems in the context of a loan approval process.
Supervisor: Elena Maria Baralis. Politecnico di Torino, Master's degree programme in Ingegneria Informatica (Computer Engineering), 2019
PDF (Tesi di laurea, 3 MB). License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
In April 2019, the High-Level Expert Group on AI, appointed by the European Commission, presented the document “Ethics Guidelines for Trustworthy Artificial Intelligence”. In these guidelines, the group identifies seven key requirements that AI systems should meet in order to be considered trustworthy. To meet these requirements, companies need to apply a combination of organizational and technical adjustments to their AI systems. This work focuses on two key technical aspects of trustworthy AI: interpretability of the underlying machine learning model and fairness in the decisions taken by the system. Model interpretability can be defined as the degree to which a human can understand the cause of a decision, while machine learning fairness refers to the property of an AI system of not basing its decisions on sensitive attributes, such as gender or skin colour.
These concepts are extensively analysed in the first part of the thesis, together with a selection of algorithms, frameworks, and tools available for understanding an AI system’s behaviour and detecting biased decisions.
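As an illustration of the kind of bias detection the abstract refers to, the sketch below (not taken from the thesis; data and function names are hypothetical) computes a simple demographic parity gap on synthetic loan decisions: the difference in approval rates between two groups defined by a sensitive attribute.

```python
# Illustrative sketch only: a demographic parity check on synthetic
# loan-approval decisions. A large gap in approval rates between groups
# defined by a sensitive attribute (e.g. gender) can signal biased decisions.

def demographic_parity_difference(decisions, sensitive):
    """Absolute difference in approval rates between two groups.

    decisions: list of 0/1 loan outcomes (1 = approved)
    sensitive: list of 0/1 group labels for the sensitive attribute
    """
    groups = {0: [], 1: []}
    for d, s in zip(decisions, sensitive):
        groups[s].append(d)
    rate = lambda g: sum(g) / len(g)
    return abs(rate(groups[0]) - rate(groups[1]))

# Toy data: group 1 is approved more often than group 0.
decisions = [1, 0, 0, 1, 1, 1, 0, 1]
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_difference(decisions, sensitive)
print(f"demographic parity gap: {gap:.2f}")  # prints 0.25
```

A gap near zero suggests approval rates are similar across groups; real fairness toolkits (such as those surveyed in the thesis) implement this and several other group and individual fairness metrics.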