Politecnico di Torino

Techniques for trustworthy artificial intelligence systems in the context of a loan approval process

Flavio Lorenzo

Rel. Elena Maria Baralis. Politecnico di Torino, Corso di laurea magistrale in Ingegneria Informatica (Computer Engineering), 2019

Licenza: Creative Commons Attribution Non-commercial No Derivatives.

In April 2019, the High-Level Expert Group on AI, appointed by the European Commission, presented the document “Ethics Guidelines for Trustworthy Artificial Intelligence”. In these guidelines, the group identifies seven key requirements that AI systems should meet in order to be considered trustworthy. To meet these requirements, companies need to apply a combination of organizational and technical adjustments to their AI systems. This work focuses on two key technical aspects of trustworthy AI: interpretability of the underlying machine learning model and fairness in the decisions taken by the system. Model interpretability can be defined as the degree to which a human can understand the cause of a decision, while machine learning fairness refers to the property of an AI system of not basing its decisions on sensitive attributes, such as gender or skin colour. These concepts are extensively analysed in the first part of the thesis, and a selection of algorithms, frameworks, and tools available for supporting the processes of comprehending an AI system’s behaviour and detecting biased decisions is presented. Several solutions that can be adopted to mitigate the biases embedded in an AI system are also discussed. In the second part of the work, the topics of interpretability and fairness are applied to the use case of a loan approval process. The algorithms and frameworks presented in the first part of the thesis are exploited to build a web-based application that allows the user to manage the whole life cycle of a machine learning model, provide an interpretation of the model’s output, and monitor the model’s decisions to detect and react to unfair behaviours. An overview of the architecture and interface of this model management application is presented, and the most interesting components of the application are discussed in detail and compared to existing solutions.
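To make the fairness notion in the abstract concrete, the following is a minimal, hypothetical sketch (not taken from the thesis) of a disparate-impact check on loan decisions: the ratio of approval rates between an unprivileged and a privileged group, with values below 0.8 commonly flagged under the "four-fifths rule". The data and function name are illustrative assumptions.

```python
def disparate_impact(decisions, groups, privileged):
    """Ratio of approval rates: unprivileged group / privileged group.

    decisions: list of 1 (approved) / 0 (denied)
    groups: list of group labels, aligned with decisions
    privileged: label of the privileged group
    """
    priv = [d for d, g in zip(decisions, groups) if g == privileged]
    unpriv = [d for d, g in zip(decisions, groups) if g != privileged]
    rate_priv = sum(priv) / len(priv)      # approval rate, privileged group
    rate_unpriv = sum(unpriv) / len(unpriv)  # approval rate, unprivileged group
    return rate_unpriv / rate_priv


# Toy loan decisions: group "A" is approved 75% of the time, "B" only 25%.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact(decisions, groups, privileged="A"))  # 0.333... < 0.8
```

A monitoring component such as the one described in the thesis could periodically recompute a metric of this kind on the model's recent decisions and raise an alert when it crosses a threshold.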

Supervisor: Elena Maria Baralis
Academic year: 2019/20
Publication type: Electronic
Number of Pages: 97
Degree program: Corso di laurea magistrale in Ingegneria Informatica (Computer Engineering)
Degree class: LM-32 - Computer Systems Engineering
Collaborating company: Blue Reply Srl
URI: http://webthesis.biblio.polito.it/id/eprint/13176