Sanity Checks for Explanations of Deep Neural Networks Predictions

Francesca Nuzzo

Supervisors: Francesco Vaccarino, Antonio Mastropietro. Politecnico di Torino, Master's degree programme in Mathematical Engineering, 2020.

PDF (Tesi_di_laurea) - Thesis (18 MB)
License: Creative Commons Attribution Non-commercial No Derivatives.

At the dawn of the fourth industrial revolution, the performance of Artificial Intelligence (AI) systems is reaching, or even exceeding, the human level on an increasing number of complex tasks. However, because of their nested non-linear structure, deep learning models often suffer from opacity and turn out to be uninterpretable black boxes. This lack of transparency is a barrier to adopting these systems in tasks where interpretability is essential, such as autonomous driving, medical applications, or finance. To overcome this drawback and to comply with the General Data Protection Regulation (GDPR), the development of algorithms for visualizing, explaining, and interpreting the predictions of deep neural networks has recently attracted increasing attention. The paradigms underlying this problem fall within the field of Explainable Artificial Intelligence (XAI), which comprises a suite of methods and algorithms enabling humans to understand, trust, and effectively manage the emerging generation of artificial intelligent partners. Over a relatively short period of time, a plethora of explanation methods and strategies have come into existence, whose purpose is to highlight the regions of the input, typically an image, that are most responsible for a given prediction. Despite their strong performance, however, assessing the scope and quality of the results provided by such explanation methods remains difficult. The goal of this thesis is to validate the explanations of deep neural network predictions in computer vision generated by several recently proposed state-of-the-art methods. The experiments conducted in this work aim to answer the following question: what assures us that the explanation provided by a method reliably reflects what the network has learned in order to arrive at its decision?
Along these lines, a sanity check taken from the recent literature is described, and experiments are performed to assess the sensitivity of explanation methods to model parameters. If a method truly highlights the most important regions of the input, then randomly reinitializing the parameters of the last layer of the network, which changes the output, should also change the explanation it produces. Surprisingly, some of the methods proposed in the literature are model-independent and therefore fail this randomization test. By providing the same explanation even after the model parameters are randomized, such methods are inadequate to faithfully explain the network's predictions. The reliability of explanation methods is crucial in tasks where visual inspection of the results is not easy or where the cost of an incorrect attribution is high. The analysis conducted in this thesis aims to provide useful insights for developing better and more reliable visualization methods for deep neural networks, in order to gain the trust of even the most skeptical users.
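The parameter-randomization sanity check described above can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not the thesis code: it uses a toy two-layer numpy "network" with a scalar output, plain input-gradient saliency as the explanation method, and cosine similarity to compare explanations before and after reinitializing the last layer. All names and the architecture are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: x -> ReLU(W1 @ x) -> w2 . h  (scalar score)
W1 = rng.normal(size=(8, 16))
w2 = rng.normal(size=8)

def saliency(x, W1, w2):
    """Input-gradient explanation: d(score)/dx."""
    pre = W1 @ x
    mask = (pre > 0).astype(float)   # derivative of ReLU
    return (w2 * mask) @ W1          # chain rule back to the input

x = rng.normal(size=16)
expl_trained = saliency(x, W1, w2)

# Randomize the last layer's parameters, as the sanity check prescribes;
# this changes the model's output.
w2_rand = rng.normal(size=8)
expl_rand = saliency(x, W1, w2_rand)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A faithful method should now give a noticeably different explanation,
# i.e. low similarity; a model-independent method would score near 1.
print("similarity after randomization:", cosine(expl_trained, expl_rand))
```

A method that is insensitive to this randomization cannot be telling us what the trained parameters contributed to the prediction, which is exactly the failure mode the test is designed to expose.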

Supervisors: Francesco Vaccarino, Antonio Mastropietro
Academic year: 2020/21
Publication type: Electronic
Number of Pages: 106
Degree programme: Master's degree in Mathematical Engineering
Degree class: New organization > Master of Science > LM-44 - MATHEMATICAL MODELLING FOR ENGINEERING
Collaborating companies: ADDFOR S.p.A.
URI: http://webthesis.biblio.polito.it/id/eprint/15595