Politecnico di Torino

Adversarial Machine Learning against Real-World Attacks on CNN Object Detectors

Alessandro Ottaviano


Rel. Guido Masera, Michele Magno, Luca Benini. Politecnico di Torino, Corso di laurea magistrale in Nanotechnologies For Icts (Nanotecnologie Per Le Ict), 2020

License: Creative Commons Attribution Non-commercial No Derivatives.


The past few years have witnessed growing interest in analyzing the robustness of Machine Learning models against adversarial examples, i.e., externally injected modifications to the input of a Neural Network that corrupt the predicted output. The adversarial nature of these examples is often imperceptible with respect to the clean input, yet it causes a non-negligible drop in accuracy. This raises several open questions about the actual security of modern Machine Learning models employed for different tasks, from Speech Recognition to Computer Vision. In fact, an expanding field of research is devoted to crafting countermeasures that reduce an attack's strength: Adversarial Defense techniques follow different approaches according to the threat model they aim to defend against. The research community has begun a process of hierarchical and methodological organization to establish a common ground of reference, enabling solid, coherent results as well as a robust application toolset. This document first analyzes the effectiveness of adversarial patch attacks targeting misdetection against benchmark Convolutional Neural Network (CNN) object and face detectors - You Only Look Once (YOLO), Single Shot Detector (SSD), and Multi-task Cascaded CNN (MTCNN) - while keeping the setup flow as uniform as possible, and assesses the obtained results in terms of the models' accuracy drop. Simulations are performed in the framework of white-box digital attacks, where the attacker has full knowledge of the model's weights; attacks in the physical world are not addressed. The experiments confirm the widespread awareness that, in general, a neural network can be deceived, although with heterogeneous degrees of damage depending on its architecture and complexity.
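In a digital patch attack such as the one evaluated above, the adversarial pattern is simply overlaid onto a region of the input frame before it reaches the detector. The following is a minimal illustrative sketch of that overlay step, not the thesis implementation; the function name and shapes are hypothetical.

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Overlay an adversarial patch onto an image at (top, left).

    In a digital white-box attack the patch pixels are simply
    written over the clean frame; no physical-world transforms
    (lighting, perspective) are modeled here.
    """
    out = image.copy()           # leave the clean frame untouched
    ph, pw = patch.shape[:2]
    out[top:top + ph, left:left + pw] = patch
    return out

# Hypothetical example: a 416x416 RGB frame (a common YOLO input
# size) with a 100x100 white patch pasted at row 50, column 60.
frame = np.zeros((416, 416, 3), dtype=np.uint8)
patch = np.full((100, 100, 3), 255, dtype=np.uint8)
attacked = apply_patch(frame, patch, top=50, left=60)
```

The patch contents themselves would be optimized by gradient descent against the detector's loss; only the pasting mechanics are shown here.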
Thereafter, the second part of the project introduces a defense for CNN object detectors under attack. De-randomized smoothing by structured ablation is chosen from the set of existing defenses. The adopted solution, originally developed exclusively for the domain of pure classification, is adapted to meet the constraints of object detection. The ablation defense operates at inference time as an image pre-processing step, and it is again evaluated in the domain of digital adversarial patch attacks. The applied defense decreases the adversary's success rate on the tested dataset (video frames) from 40% in the undefended case to 6% in the defended case. The pre-processing module is subsequently implemented as a combinational block in the RTL of an existing FPGA-based object detector example flow, then simulated and synthesized to assess (a) the overall resource usage and (b) the real-time constraints defined by the CNN accelerator's inference speed and the camera sensor's data acquisition time, as a first step in the transition from digital to real-world defenses at the edge computing level.
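Structured (column) ablation, as used in de-randomized smoothing, retains only a narrow band of image columns and zeroes out the rest, so that a localized patch can influence only the few ablated views that overlap it. The following is an illustrative sketch of that pre-processing step under assumed conventions (zero-fill, wrap-around band); it is not the thesis's RTL or software implementation.

```python
import numpy as np

def column_ablate(image, start, band_width):
    """Keep a vertical band of `band_width` columns starting at
    `start` (wrapping around the image width) and zero out all
    other pixels, as in column-structured ablation.
    """
    h, w, c = image.shape
    ablated = np.zeros_like(image)
    cols = np.arange(start, start + band_width) % w  # wrap-around
    ablated[:, cols, :] = image[:, cols, :]
    return ablated

# Hypothetical example: an 8x8 RGB image of ones, keeping a
# 3-column band that starts at column 6 and wraps to column 0.
img = np.ones((8, 8, 3), dtype=np.uint8)
out = column_ablate(img, start=6, band_width=3)
```

At inference time the detector would be run on every band position (or a stride of positions) and the per-view detections aggregated; only the ablation mask itself is sketched here.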

Relators: Guido Masera, Michele Magno, Luca Benini
Academic year: 2020/21
Publication type: Electronic
Number of Pages: 137
Degree course: Corso di laurea magistrale in Nanotechnologies For Icts (Nanotecnologie Per Le Ict)
Degree class: New organization > Master of Science > LM-29 - ELECTRONIC ENGINEERING
Joint supervision institution: ETH - Integrated Systems Laboratory (SWITZERLAND)
Collaborating companies: ETH Zurich
URI: http://webthesis.biblio.polito.it/id/eprint/15938