Federico Micelli
Analysis of robust neural network classifiers against gradient-based attacks.
Rel. Enrico Magli, Tiziano Bianchi. Politecnico di Torino, Corso di laurea magistrale in Ingegneria Elettronica (Electronic Engineering), 2021
PDF (Tesi_di_laurea) - Thesis
License: Creative Commons Attribution Non-commercial No Derivatives. Download (12MB)
Abstract
During the last few years, deep learning has been applied in a wide range of applications across multimedia, scientific research, and industrial processes. On a growing number of visual classification problems, these algorithms have surpassed human-level accuracy. However, the ever-growing employment of neural networks in our society raises serious security concerns, as they can be targeted by malicious adversaries. Many barriers affect the use of deep neural networks in applications where security is of key importance, such as medical diagnostics and autonomous driving. One of the most severe flaws of deep learning is represented by adversarial attacks, a collection of methods designed to perturb a neural network's input data so as to produce undesired outputs or, more generally, to cause algorithm malfunctions and a reduction in classification accuracy.
These attacks rely on perturbations that are very difficult to detect.
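As a hedged illustration of the gradient-based attacks the thesis studies (this example is not taken from the thesis itself), the canonical Fast Gradient Sign Method (FGSM) perturbs an input in the direction of the sign of the loss gradient with respect to that input. Below is a minimal NumPy sketch on a logistic-regression classifier; all weights, inputs, and the step size `eps` are chosen purely for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    # Binary classifier: class 1 if sigmoid(w.x + b) > 0.5, else class 0
    return int(sigmoid(w @ x + b) > 0.5)

def fgsm(x, y, w, b, eps):
    # Gradient of the cross-entropy loss w.r.t. the input x
    # for a logistic model: dL/dx = (sigmoid(w.x + b) - y) * w
    grad_x = (sigmoid(w @ x + b) - y) * w
    # FGSM step: move every input coordinate by eps in the
    # direction that increases the loss
    return x + eps * np.sign(grad_x)

# Illustrative parameters and input (assumptions, not from the thesis)
w = np.array([2.0, -3.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1  # true label of x

x_adv = fgsm(x, y, w, b, eps=0.5)
print(predict(x, w, b))      # clean input: classified as 1
print(predict(x_adv, w, b))  # adversarial input: misclassified as 0
```

The step size `eps` controls the perturbation magnitude: the smaller it is, the harder the perturbation is to detect, at the cost of a lower chance of flipping the prediction.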