Gabriele Quaranta
Adversarial Patch Attacks Against Deep-Learning Based UAV Detection.
Rel. Enrico Magli. Politecnico di Torino, Corso di laurea magistrale in Ingegneria Informatica (Computer Engineering), 2025
PDF (Tesi_di_laurea) - Thesis (17MB)
License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
The increased use of Unmanned Aerial Vehicles (UAVs) in both military and civilian domains has driven the development of deep-learning-based systems for automated detection. While effective, these systems are vulnerable to adversarial attacks that can undermine their reliability. This thesis explores the design and implementation of adversarial patch attacks intended to evade state-of-the-art detectors. The research begins by evaluating existing adversarial patches from the literature, finding their effectiveness to be limited when transferred to new models, which underscores the need for a custom-tailored approach. This motivates the development of a white-box attack strategy targeting the YOLO family of object detectors (v5, v8, and v10).
Initial experiments revealed a critical insight: an adversarial signal's efficacy is dramatically enhanced when it is applied as a repeating, tiled pattern across the object's surface rather than as an isolated patch.
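The tiling idea described above can be sketched as follows. This is a minimal illustrative example, not the thesis's actual implementation: the function name `apply_tiled_patch`, the NumPy-based formulation, and the bounding-box convention are all assumptions made for demonstration.

```python
import numpy as np

def apply_tiled_patch(image, patch, bbox):
    """Tile a small adversarial patch across a bounding-box region.

    NOTE: illustrative sketch only; names and conventions are assumed,
    not taken from the thesis.

    image: (H, W, 3) float array, the input frame.
    patch: (ph, pw, 3) float array, the learned adversarial tile.
    bbox:  (x0, y0, x1, y1) pixel coordinates of the target object.

    Returns a copy of the image with the box covered by repeats of
    the patch, cropped at the box boundaries.
    """
    x0, y0, x1, y1 = bbox
    h, w = y1 - y0, x1 - x0
    ph, pw = patch.shape[:2]
    # Repeat the patch enough times to cover the box, then crop.
    reps_y = -(-h // ph)  # ceiling division
    reps_x = -(-w // pw)
    tiled = np.tile(patch, (reps_y, reps_x, 1))[:h, :w]
    out = image.copy()
    out[y0:y1, x0:x1] = tiled
    return out
```

In an actual white-box attack, the patch tensor would be optimized by gradient descent against the detector's objectness/confidence loss; this sketch only shows the geometric tiling step that the abstract identifies as the key to improved efficacy.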
