
New strategies for Deep Neural Networks explainability

Cosmin Vrinceanu


Supervisors: Edgar Ernesto Sanchez Sanchez, Annachiara Ruospo. Politecnico di Torino, Master's degree programme in Ingegneria Informatica (Computer Engineering), 2022

PDF (Tesi_di_laurea) - Thesis
License: Creative Commons Attribution Non-commercial No Derivatives.

Abstract:

Convolutional Neural Networks (CNNs) are ubiquitous and seamlessly integrated into our lives. The intrinsic opacity of these techniques is, however, a problem. Explainability is particularly important when answering questions about a CNN's reliability and resilience in the presence of faults. This thesis aims to shed light on a CNN's internal behavior by building a tool that lets researchers observe, in a 3D virtual environment, how the individual artificial neurons forming the network contribute to the classification of a given input. The tool accepts a description of the network's architecture and takes as input a file containing the network parameters. LeNet was used as the case study in this thesis. A 3D representation of the network is computed from an input image of 28x28 pixels and rendered to the screen, with the ability to move around it and zoom into its layers. The most computationally expensive operations are offloaded to the system GPU, improving performance. The tool also offers a number of functions that allow users to study specific subsets of artificial synapses, inject different types of faults into specific artificial neurons, and automate these operations via external scripting. The result can be explored in real time by "flying" through the 3D virtual environment with mouse and keyboard, or by inspecting a log file that is updated on user interaction. Lastly, the thesis considers a case study in which LeNet normally classifies an input image correctly, but carefully placed faults gradually erode the confidence of the result until a wrong classification is produced solely because of faults injected at strategic positions.
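The fault-injection idea from the case study can be illustrated with a rough sketch. This is not the thesis tool itself: the layer sizes, weights, and the `inject_stuck_at_zero` helper below are hypothetical, a minimal stand-in (a single dense layer with softmax, playing the role of a LeNet-like final stage) for forcing chosen artificial neurons to a stuck-at-0 value and observing how the classification confidence changes.

```python
import numpy as np

# Synthetic stand-in for the final stage of a LeNet-like classifier:
# 84 penultimate-layer neurons feeding a 10-class dense layer + softmax.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 84))    # dense-layer weights (synthetic)
b = rng.normal(size=10)          # biases (synthetic)
features = rng.normal(size=84)   # penultimate-layer activations for one input

def classify(feats):
    """Return (predicted class, confidence) for the given activations."""
    logits = W @ feats + b
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    probs = exp / exp.sum()
    return int(probs.argmax()), float(probs.max())

def inject_stuck_at_zero(feats, neuron_ids):
    """Model a stuck-at-0 fault: the listed neurons always output 0."""
    faulty = feats.copy()
    faulty[list(neuron_ids)] = 0.0
    return faulty

clean_cls, clean_conf = classify(features)
faulty_cls, faulty_conf = classify(inject_stuck_at_zero(features, range(40)))
print("clean:", clean_cls, clean_conf)
print("faulty:", faulty_cls, faulty_conf)
```

With enough neurons forced to a faulty value at well-chosen positions, the confidence in the original class can drop until another class wins, which mirrors the gradual erosion described in the case study.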

Supervisors: Edgar Ernesto Sanchez Sanchez, Annachiara Ruospo
Academic year: 2021/22
Publication type: Electronic
Number of pages: 39
Subjects:
Degree programme: Corso di laurea magistrale in Ingegneria Informatica (Computer Engineering)
Degree class: New system > Master's degree > LM-32 - INGEGNERIA INFORMATICA
Collaborating companies: NOT SPECIFIED
URI: http://webthesis.biblio.polito.it/id/eprint/23498