Luca Dordoni
Sparsification of deep neural networks via ternary quantization.
Advisors: Enrico Magli, Giulia Fracastoro, Sophie Fosson, Andrea Migliorati, Tiziano Bianchi. Politecnico di Torino, Master's degree programme in Physics Of Complex Systems (Fisica Dei Sistemi Complessi), 2023
PDF (Tesi_di_laurea), 8MB. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
In recent years, deep neural networks (DNNs) have achieved remarkable results in several machine learning tasks, especially in computer vision applications, where they can often surpass human performance. Typical deep models consist of tens of layers and millions of parameters, resulting in high memory consumption and computational cost. At the same time, the demand for smaller models is growing fast, driven by the desire to deploy DNNs in resource-constrained environments such as mobile devices. Methods that tackle this challenge and obtain more compact networks while preserving performance rely on quantization or sparsification of the parameters. This thesis explores a combination of the two techniques, i.e., a sparsification method based on the ternarization of network parameters.
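To make the idea concrete: ternarization constrains each weight to one of three values, {-α, 0, +α}, so the zeroed entries simultaneously sparsify the network. The abstract does not specify the thesis's exact scheme, so the sketch below uses the classic Ternary Weight Networks heuristic (threshold Δ = 0.7 · mean|w|, with α set to the mean magnitude of the surviving weights) purely as an illustration; the function name and parameters are hypothetical.

```python
import numpy as np

def ternarize(weights, delta_factor=0.7):
    """Map a weight tensor to the ternary set {-alpha, 0, +alpha}.

    Illustrative sketch using the Ternary Weight Networks heuristic
    (Li et al., 2016); the thesis's actual method may differ.
    """
    w = np.asarray(weights, dtype=np.float64)
    # Magnitudes below this threshold are pruned to zero (sparsification).
    delta = delta_factor * np.mean(np.abs(w))
    mask = np.abs(w) > delta
    # Scale alpha: mean magnitude of the weights that survive pruning.
    alpha = np.mean(np.abs(w[mask])) if mask.any() else 0.0
    return alpha * np.sign(w) * mask

# Example: small weights are zeroed, the rest snap to +/- alpha.
print(ternarize([0.9, -0.05, 0.4, -0.8]))  # [ 0.7  0.   0.7 -0.7]
```

Because the zero entries can be skipped entirely and the nonzero ones are a single shared magnitude with a sign bit, such a representation reduces both storage and multiply operations at inference time.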