
Analysis of robust neural network classifiers against gradient-based attacks

Federico Micelli

Analysis of robust neural network classifiers against gradient-based attacks.

Supervisors: Enrico Magli, Tiziano Bianchi. Politecnico di Torino, Master's degree programme in Electronic Engineering (Ingegneria Elettronica), 2021

PDF (Tesi_di_laurea) - Thesis, 12 MB
License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract:

In recent years, deep learning has been applied to a huge number of tasks in multimedia, scientific research, and industrial processes. On an increasing number of visual classification problems, these algorithms have reached accuracy levels beyond human capabilities. However, the ever-growing deployment of neural networks in our society poses serious security concerns, as they can be targeted by malicious adversaries. Many barriers still limit the use of deep neural networks in applications where security is of key importance, such as medical diagnostics and autonomous driving. One of the most severe flaws of deep learning is adversarial attacks, a collection of methods designed to perturb a neural network's input data so as to produce undesired outputs or, more generally, to cause malfunctions and a reduction in classification accuracy. This happens with perturbations that are very difficult to detect: indeed, the adversarial examples generated by these attacks are often imperceptible to the human eye.

This thesis investigates the adversarial robustness of the Gaussian Class Conditional Simplex (GCCS) method, a novel defense against adversarial examples designed at Politecnico di Torino. After providing background on neural networks, deep learning, and adversarial attacks, it reports a thorough experimental evaluation showing that the GCCS method is more robust than competing state-of-the-art techniques against a wide range of attacks. First, I carried out experiments with Adversarial Training (AT), a common approach based on providing adversarial examples at training time, and I show how AT greatly improves robustness. Further, I obtained the most robust model by combining GCCS with other state-of-the-art techniques. This model was then employed in a series of tests demonstrating that, in most cases, the GCCS classification method outperforms the other techniques regardless of the attack considered. These results are also confirmed by plotting the feature distributions in the latent space: with GCCS they are well separated, ensuring high inter-class separation. I also show how other state-of-the-art techniques tend to mix features belonging to different classes, leading to misclassification.
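As a rough sketch of how the gradient-based attacks and the Adversarial Training mentioned above operate, the snippet below shows a single-step, FGSM-style perturbation and a training step on the resulting adversarial examples. It assumes a generic PyTorch image classifier; the model, the epsilon value, and the single-step attack are illustrative assumptions and do not reproduce the specific setup or the GCCS method evaluated in the thesis.

# Minimal sketch of a gradient-based (FGSM-style) attack and of adversarial
# training, assuming a generic PyTorch image classifier with inputs in [0, 1].
# Model architecture, epsilon and data handling are illustrative placeholders.
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Perturb inputs x by one step in the direction of the loss gradient sign."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One gradient-sign step, then clamp back to the valid image range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()


def adversarial_training_step(model, optimizer, x, y, epsilon=8 / 255):
    """Single adversarial-training step: craft adversarial examples, train on them."""
    model.eval()                      # keep batch-norm statistics fixed while attacking
    x_adv = fgsm_attack(model, x, y, epsilon)
    model.train()
    optimizer.zero_grad()             # also clears gradients accumulated by the attack
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

In practice, evaluations of this kind typically rely on stronger iterative attacks such as PGD, both for testing and for adversarial training; the single-step version above is only meant to convey the basic mechanism.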

Supervisors: Enrico Magli, Tiziano Bianchi
Academic year: 2021/22
Publication type: Electronic
Number of pages: 97
Subjects:
Degree programme: Master's degree programme in Electronic Engineering (Ingegneria Elettronica)
Degree class: New regulations > Master's degree > LM-29 - ELECTRONIC ENGINEERING
Partner companies: NOT SPECIFIED
URI: http://webthesis.biblio.polito.it/id/eprint/21235