
Evasion attacks against machine-learning based behavioral authentication

Marco Farinetti


Rel. Paolo Garza. Politecnico di Torino, Corso di laurea magistrale in Ingegneria Informatica (Computer Engineering), 2018

PDF (Tesi_di_laurea) - Thesis
License: Creative Commons Attribution Non-commercial No Derivatives.

Abstract:

Authenticating a user means verifying their identity. Today, authentication systems are used for all kinds of applications and in various forms, from traditional secrecy-based methods to more modern biometrics-based ones. Behavioral authentication in particular has become very relevant as a means of continuously verifying a user's genuineness. These systems are built on top of machine learning algorithms and, as such, are subject to adversarial attacks. Evasion attacks rely on generating adversarial instances capable of evading a classifier through small modifications. While adversarial instances have been successfully generated for differentiable models, this is not true for tree ensembles, for which the literature is very limited. In this work we evaluate the resilience of a gait-based authentication system, built on tree ensembles, against evasion attacks. First, we propose an algorithm for generating adversarial instances while constraining them to be hard to detect. Second, we define metrics for evaluating evasion performance. Finally, we deploy a defensive mechanism to detect adversarial instances before they are submitted to the tree ensemble. We validate our research by testing the attack in multiple scenarios, using a dataset of gait data collected from 17 subjects and 9 body positions. We show that adversarial instances can be generated starting from the adversary's own data. Our analysis shows that instances generated in this way are hard to detect, with evasion rates between 60% and 90%, depending on the constraints imposed during generation. Our defensive mechanism is able to reject up to 40% of the generated adversarial instances.
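The attack setting and the evasion-rate metric described in the abstract can be illustrated with a minimal sketch. This is not the constrained generation algorithm or the defensive mechanism developed in the thesis: it is a deliberately naive random-search baseline against a scikit-learn tree ensemble, and the data, function names, and parameters (perturb_until_evasion, budget, n_trials) are hypothetical stand-ins for the gait features used in the work.

```python
# Illustrative sketch only: a naive random-search evasion attack against a
# tree ensemble, plus an evasion-rate metric. Hypothetical setup, not the
# thesis' actual algorithm, features, or detector.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in for gait feature vectors: class 1 = genuine user, class 0 = adversary.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def perturb_until_evasion(x, clf, budget=0.3, n_trials=500):
    """Randomly perturb x within an L-infinity budget until the ensemble
    labels it as the genuine class (1). Returns the adversarial instance,
    or None if no evading perturbation was found."""
    for _ in range(n_trials):
        delta = rng.uniform(-budget, budget, size=x.shape)
        x_adv = x + delta
        if clf.predict(x_adv.reshape(1, -1))[0] == 1:
            return x_adv
    return None

# Start from the adversary's own samples (class 0), mirroring the setup in
# which adversarial instances are generated from the adversary's own data.
adversary_samples = X[y == 0][:100]
adversarial = [perturb_until_evasion(x, clf) for x in adversary_samples]

# Evasion rate: fraction of attempts that yielded an instance the ensemble
# accepts as genuine.
evasion_rate = sum(a is not None for a in adversarial) / len(adversarial)
print(f"Evasion rate: {evasion_rate:.2%}")
```

The constraints mentioned in the abstract (keeping instances hard to detect) would enter a sketch like this as limits on how far and in which feature directions the perturbation may move, which is where the trade-off between evasion rate and detectability arises.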

Supervisor: Paolo Garza
Academic year: 2018/19
Publication type: Electronic
Number of pages: 57
Subjects:
Degree programme: Corso di laurea magistrale in Ingegneria Informatica (Computer Engineering)
Degree class: Nuovo ordinamento > Laurea magistrale > LM-32 - INGEGNERIA INFORMATICA
Joint supervision institution: KUL - Katholieke Universiteit Leuven (BELGIUM)
Partner companies: NOT SPECIFIED
URI: http://webthesis.biblio.polito.it/id/eprint/8977