Michael Elias
Leveraging Quantization and Approximate Computing to Enhance Adversarial Defense in Deep Neural Networks.
Supervisors: Maurizio Martina, Guido Masera, Flavia Guella. Politecnico di Torino, Master's degree programme in Ingegneria Informatica (Computer Engineering), 2025
License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
Over the last few years, Convolutional Neural Networks (CNNs) and other deep neural network architectures have seen increasing use across multiple domains, such as computer vision, autonomous driving, and medicine. This widespread adoption has exposed CNNs to adversarial attacks: deliberate perturbations applied to input data with the goal of forcing the network to produce wrong results. Quantization and Approximate Computing (AC) were originally introduced to reduce the memory footprint and computational cost of CNNs. Moreover, recent works have demonstrated that the noise they introduce can enhance input features, thereby reducing the likelihood of an adversarial perturbation fooling the CNN. In this study, we explore the effect of quantization and AC on the robustness of CNNs. We propose a software framework to train and evaluate quantized CNNs with support for layerwise approximation.
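To make the quantization noise mentioned in the abstract concrete, here is a minimal NumPy sketch of uniform symmetric weight quantization, the kind of scheme typically used in quantized CNNs. The function name and details are illustrative assumptions, not the API of the thesis's framework; the point is only that rounding weights to a low-bit grid injects a small, structured perturbation.

```python
import numpy as np

def quantize_uniform(w, n_bits=8):
    """Uniform symmetric ("fake") quantization of a weight tensor.

    Illustrative sketch only: the abstract does not specify the
    framework's actual quantization scheme or API.
    """
    qmax = 2 ** (n_bits - 1) - 1              # e.g. 127 for signed 8-bit
    scale = np.max(np.abs(w)) / qmax          # map the largest magnitude to qmax
    q = np.round(w / scale).astype(np.int32)  # integer codes on the low-bit grid
    return q * scale                          # dequantized weights for simulation

w = np.array([0.51, -1.27, 0.003, 0.98])
w_q = quantize_uniform(w, n_bits=4)
# The rounding residual acts as structured noise on the weights; works cited
# in the abstract argue this noise can partially mask adversarial perturbations.
err = np.abs(w - w_q)
```

Lowering `n_bits` coarsens the grid and increases the injected noise, which is the knob such robustness studies typically sweep layer by layer.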
