Fulvio Di Girolamo
Fighting Fire with Fire - On the Effectiveness of Neural Backdoors in Countering Test-Time Evasion Attacks.
Supervisor: Cataldo Basile. Politecnico di Torino, Master's degree programme in Ingegneria Informatica (Computer Engineering), 2021
Abstract
Due to its outstanding performance, deep learning - the branch of machine learning built on models known as deep neural networks - has over the years become increasingly prevalent in several application domains, including security- and safety-sensitive ones such as anomaly detection, authentication systems, and autonomous driving. In such contexts an adversary may be motivated to look for ways to induce the misclassification of given inputs, e.g. to pass malware off as benign software or to provoke an accident. Test-time evasion attacks and neural backdoors are two types of attacks against deep learning models which, albeit very different in nature, both achieve this adversarial goal: the former exploits malicious inputs known as adversarial samples at inference time, while the latter manipulates the training of the target model.
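To make the first attack type concrete, the following is a minimal, purely illustrative sketch of a test-time evasion in the style of the fast gradient sign method, applied to a toy linear classifier. All weights, inputs, and the perturbation budget `eps` are made-up assumptions for illustration; they are not taken from the thesis.

```python
# Toy linear "model": score = w . x + b; predict class 1 if score > 0.
# Every value here is an illustrative assumption, not from the thesis.
w = [1.0, -2.0, 0.5]
b = 0.1

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def predict(x):
    return 1 if score(x) > 0 else 0

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

# A benign input that the model classifies as class 1.
x = [2.0, 0.3, 0.4]

# FGSM-style evasion: for a linear model, the gradient of the score
# with respect to the input is simply w, so stepping each feature
# against sign(w) by a budget eps pushes the score toward the
# decision boundary and beyond it.
eps = 1.5
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(x), predict(x_adv))  # 1 0: the adversarial sample evades
```

The same sign-of-gradient idea carries over to deep networks, where the gradient is obtained by backpropagation rather than read off the weights directly.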
In this document we explore the interactions between these two attack types, with the intent of setting them against one another.
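For contrast, here is an equally illustrative sketch of the second attack type, a neural backdoor. The trigger pattern, target class, and decision rule below are hypothetical stand-ins: the point is only that a backdoored model behaves honestly on clean inputs but is forced into the attacker's chosen class whenever the trigger is present.

```python
# Toy sketch of a backdoored classifier (illustrative, not from the thesis).
# The backdoor is planted at training time: any input carrying the trigger
# pattern in its last two features is mapped to the attacker's target class.

TRIGGER = (9.0, 9.0)   # hypothetical trigger pattern
TARGET_CLASS = 0       # attacker's chosen output

def clean_predict(x):
    # Stand-in for the model's honest decision rule on clean inputs.
    return 1 if sum(x) > 0 else 0

def backdoored_predict(x):
    # Behaves like the clean model unless the trigger is present.
    if tuple(x[-2:]) == TRIGGER:
        return TARGET_CLASS
    return clean_predict(x)

x_clean = [1.0, 2.0, 0.5, 0.5]   # handled honestly
x_trig  = [1.0, 2.0, 9.0, 9.0]   # trigger appended by the adversary

print(backdoored_predict(x_clean), backdoored_predict(x_trig))  # 1 0
```

Unlike evasion, which needs no access to training, a backdoor requires the adversary to influence the training process, which is why the two attacks occupy such different threat models.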