Giovanni Caramia
Adversarial Attacks for Convolutional Neural Networks and Capsule Networks.
Supervisors: Maurizio Martina, Andreas Steininger, Muhammad Shafique. Politecnico di Torino, Master's degree course in Mechatronic Engineering (Ingegneria Meccatronica), 2021
PDF (Tesi_di_laurea), 29MB. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
In the Computer Vision (CV) context, image classification is a supervised learning problem with applications in many fields, such as autonomous driving, medical diagnosis, and remote sensing. In Deep Learning (DL), the image classification problem is addressed by several architectures, from simple Convolutional Neural Networks (CNNs) to more complex models such as Capsule Networks (CapsNets). An important aspect of image classification is the robustness of these architectures against adversarial attacks: an image can be misclassified by crafting a small perturbation of the input. In this dissertation, adversarial attacks are crafted on Residual Neural Network (ResNet) and CapsNet models.
In a CapsNet configuration, attacks can also be injected through the Vote Attack, which directly attacks the votes instead of the output capsules.
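To make the idea of a "small perturbation of the input" concrete, the following is a minimal sketch of the classic Fast Gradient Sign Method (FGSM) style of attack, shown here on a toy linear softmax classifier rather than an actual ResNet or CapsNet; the model, weights, and epsilon value are purely illustrative, not the setup used in the thesis.

```python
import numpy as np

def fgsm_perturb(x, w, b, y_true, eps):
    """FGSM-style perturbation for a linear softmax classifier.

    Returns x_adv = clip(x + eps * sign(grad_x loss)), the bounded input
    perturbation that locally increases the cross-entropy loss.
    """
    logits = w @ x + b
    p = np.exp(logits - logits.max())          # stable softmax
    p /= p.sum()
    onehot = np.zeros_like(p)
    onehot[y_true] = 1.0
    grad_x = w.T @ (p - onehot)                # d(cross-entropy)/dx for a linear model
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Toy example: a 2-class classifier on a 4-pixel "image" (all values illustrative).
rng = np.random.default_rng(0)
w = rng.normal(size=(2, 4))
b = np.zeros(2)
x = rng.random(4)
y = int(np.argmax(w @ x + b))                  # the clean prediction
x_adv = fgsm_perturb(x, w, b, y, eps=0.3)
print(float(np.abs(x_adv - x).max()))          # perturbation magnitude, bounded by eps
```

For a deep network the analytic gradient above is replaced by backpropagation through the model, but the principle is the same: the attacker moves each input component a small step in the direction that raises the classifier's loss.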