Gilberto Manunza
On the Impact of Adversarial Training on Uncertainty Estimation and Uncertainty Targeted Attacks.
Advisors: Barbara Caputo, Martin Jaggi, Matteo Matteucci. Politecnico di Torino, Master's degree programme in Data Science and Engineering, 2021
Thesis — License: Creative Commons Attribution-NonCommercial-NoDerivatives. PDF (Tesi_di_laurea), 7 MB.
Abstract
State-of-the-art deep learning models, despite being successful in many applications, have the problem of being sensitive to small perturbations in the input data. These perturbations can easily be crafted by an adversary in order to attack a neural network and reduce its performance. This problem raises many reliability and security concerns about the deployment of deep learning models in real-world applications. Adversarial training methods aim at improving the robustness of the model to such attacks, but many of them – including state-of-the-art techniques like Projected Gradient Descent (PGD) – often lead to networks with lower unperturbed (clean) accuracy.
Additionally, some fast adversarial training techniques, e.g.
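As a rough illustration of the attack family the abstract refers to, an L-infinity PGD attack can be sketched on a toy logistic-regression model (the model, function names, and hyperparameter values here are hypothetical stand-ins; the thesis applies PGD to deep neural networks):

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Sketch of an L-infinity PGD attack on a toy logistic-regression
    model with weights w and bias b (hypothetical model for illustration).
    Maximises the loss within an eps-ball around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        # gradient of the logistic loss w.r.t. the input, for label y in {0, 1}
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        grad = (p - y) * w
        # ascend the loss along the gradient's sign (L-infinity geometry)
        x_adv = x_adv + alpha * np.sign(grad)
        # project back onto the eps-ball around the clean input
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

The projection step is what bounds the perturbation: however many ascent steps are taken, the adversarial example never moves more than eps away from the clean input in any coordinate.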