Francesca Nuzzo
Sanity Checks for Explanations of Deep Neural Networks Predictions.
Supervisors: Francesco Vaccarino, Antonio Mastropietro. Politecnico di Torino, Master's degree course in Mathematical Engineering, 2020
PDF (Tesi_di_laurea), 18 MB. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
At the dawn of the fourth industrial revolution, the performance of Artificial Intelligence (AI) systems is reaching, or even exceeding, the human level on an increasing number of complex tasks. However, because of their nested non-linear structure, deep learning models often suffer from opacity and turn out to be uninterpretable black boxes. This lack of transparency is a barrier to the adoption of these systems for tasks where interpretability is essential, such as autonomous driving, medical applications, or finance. To overcome this drawback and to comply with the General Data Protection Regulation (GDPR), the development of algorithms for visualizing, explaining, and interpreting deep neural network predictions has recently attracted increasing attention.
Paradigms underlying this problem fall within the so-called Explainable Artificial Intelligence (XAI) field.
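As a purely illustrative aside (not taken from the thesis, whose actual methods are in the full PDF), the kind of explanation algorithm the abstract refers to can be sketched with a gradient-based saliency map: the explanation for a prediction is the gradient of the predicted class score with respect to the input. The snippet below uses a hypothetical tiny linear classifier, where that gradient is simply a row of the weight matrix, and shows the model-randomization sanity check the title alludes to, in which randomizing the model's weights should change the explanation.

```python
import random

random.seed(0)

N_CLASSES, N_FEATURES = 3, 4  # toy sizes, chosen only for illustration

def random_weights():
    """A random linear classifier: one weight row per class."""
    return [[random.gauss(0, 1) for _ in range(N_FEATURES)]
            for _ in range(N_CLASSES)]

def predict(W, x):
    """Index of the class with the highest linear score W @ x."""
    scores = [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]
    return max(range(len(scores)), key=scores.__getitem__)

W = random_weights()
x = [random.gauss(0, 1) for _ in range(N_FEATURES)]
pred = predict(W, x)

# Gradient-based saliency: for a linear model, the gradient of the
# predicted class score with respect to x is exactly W[pred].
saliency = W[pred]

# Sanity check: recompute the explanation for a weight-randomized model.
# A faithful explanation should depend on the learned weights, so it
# should differ from the original saliency.
W_rand = random_weights()
saliency_rand = W_rand[predict(W_rand, x)]
changed = saliency != saliency_rand
```

The check is deliberately minimal: for deep networks the gradient is computed by backpropagation rather than read off a weight row, but the logic of comparing explanations before and after randomization is the same.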