Alessandro Ottaviano
Adversarial Machine Learning against Real-World Attacks on CNN Object Detectors.
Supervisors: Guido Masera, Michele Magno, Luca Benini. Politecnico di Torino, Master's degree programme in Nanotechnologies for ICTs (Nanotecnologie Per Le Ict), 2020
PDF (Tesi_di_laurea), 14MB. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
The past few years have witnessed a growing interest in analyzing the robustness of Machine Learning models against adversarial examples, i.e., externally injected modifications to the input of a Neural Network that corrupt the correctness of the predicted output. The adversarial nature of such examples is often imperceptible with respect to the clean input, yet it causes a non-negligible drop in accuracy. This raises several issues and open questions about the actual security of modern Machine Learning models employed in different tasks, from Speech Recognition to Computer Vision. Indeed, an expanding field of research is devoted to crafting countermeasures that lower an attack's strength: Adversarial Defense techniques follow different approaches according to the attack threat model they aim to defend against.
The research community has started a process of hierarchical and methodological organization to establish a common ground of reference, enabling solid, coherent results as well as a robust application toolset.
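The core idea described above, a small input perturbation that flips a model's prediction, can be sketched with a toy example. The following is a minimal, hypothetical FGSM-style illustration on a linear classifier, not the models or attacks studied in the thesis: each input coordinate moves by only 10% of its typical magnitude, yet the prediction flips.

```python
import numpy as np

# Hypothetical sketch of an adversarial example (FGSM-style) on a toy
# linear classifier f(x) = sign(w . x). All values are illustrative.

d = 1000
w = np.ones(d)                           # stand-in for trained model weights

# Clean input: a large component orthogonal to w (the alternating +-1 pattern)
# plus a small aligned component giving a positive margin w . x = 50.
x = np.tile([1.0, -1.0], d // 2) + 0.05

def predict(v):
    return int(np.sign(w @ v))

# FGSM step: for label +1 the score is w . x, so the worst-case bounded
# perturbation moves every coordinate by eps against sign(w).
eps = 0.1                                # 10% of the typical input magnitude
x_adv = x - eps * np.sign(w)

print(predict(x))      # 1  (clean input classified correctly)
print(predict(x_adv))  # -1 (prediction flipped by the small perturbation)
```

The flip works because the margin (50) is small compared to eps times the L1 norm of the weights (100); in high dimensions many tiny coordinate changes accumulate into a large change of the score, which is one intuition for why deep networks are vulnerable to such attacks.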