Giovanni Gaddi
Post-Training Quantization of a Transformer-based Autonomous Driving Neural Network.
Supervisors: Mario Roberto Casu, Edward Manca. Politecnico di Torino, Master's degree in Computer Engineering (Ingegneria Informatica), 2025
PDF (Tesi_di_laurea), Thesis. License: Creative Commons Attribution Share Alike. Download (3MB)
Abstract
The ongoing development of Autonomous Driving (AD) systems has resulted in an increased demand for perception models that combine high accuracy with computational and energy efficiency. Recent advancements include BEVFusion, a state-of-the-art multi-sensor fusion Neural Network (NN) framework that fuses camera and LiDAR data into a unified Bird's-Eye View (BEV) representation. This approach enables robust spatial reasoning and 3D object detection. BEVFusion reaches competitive performance on large-scale benchmarks such as nuScenes, a multi-modal AD dataset that provides camera and LiDAR sensor data with 3D object annotations. However, despite its high accuracy, BEVFusion's computational and memory requirements make real-time deployment on embedded or resource-constrained devices extremely difficult.
An important optimization in NN deployment is quantization, which reduces the numerical precision of a network's weights and activations (e.g., from 32-bit floating point to 8-bit integers) to lower its memory footprint and computational cost.
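To make the idea concrete, below is a minimal sketch of symmetric per-tensor int8 quantization, the basic building block behind post-training quantization schemes. It is purely illustrative: the function names are hypothetical and none of the BEVFusion-specific calibration or layer handling from the thesis is shown.

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127].

    The scale is calibrated from the largest absolute value in the tensor,
    so that value maps exactly to +/-127.
    """
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original floats."""
    return [qi * scale for qi in q]

# Example: quantize a small weight tensor and measure the round-trip error.
weights = [0.5, -1.0, 0.25, 0.75]
q, s = quantize_int8(weights)
restored = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The round-trip error is bounded by the scale (here about 1/127 of the largest weight), which is why quantization can often preserve accuracy while shrinking storage by 4x relative to float32.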