Francesca Nuzzo
Sanity Checks for Explanations of Deep Neural Networks Predictions.
Rel. Francesco Vaccarino, Antonio Mastropietro. Politecnico di Torino, Master of Science program in Mathematical Engineering, 2020
PDF (Tesi_di_laurea): Thesis, 18MB
Licence: Creative Commons Attribution Non-commercial No Derivatives
Abstract
At the dawn of the fourth industrial revolution, the performance of Artificial Intelligence (AI) systems is reaching, or even exceeding, the human level on a growing number of complex tasks. However, because of their nested non-linear structure, deep learning models often suffer from opacity and behave as uninterpretable black boxes. This lack of transparency is a barrier to the adoption of such systems for tasks where interpretability is essential, such as autonomous driving, medical applications, or finance. To overcome this drawback, and to comply with the General Data Protection Regulation (GDPR), the development of algorithms for visualizing, explaining, and interpreting deep neural network predictions has recently attracted increasing attention. The paradigms underlying this problem fall within the field of Explainable Artificial Intelligence (XAI).
Relators: Francesco Vaccarino, Antonio Mastropietro
Publication type: Thesis
