Yu Hao
Implementation of a hardware accelerator for Deep Neural Networks based on Sparse Representations of Feature Maps.
Advisor: Maurizio Martina. Politecnico di Torino, Master's degree programme in Ingegneria Elettronica (Electronic Engineering), 2022
Abstract
Deep learning, one of today's most remarkable machine learning techniques, has achieved great success in fields such as speech recognition, image analysis, and autonomous driving. However, neural network inference requires billions of multiply-and-accumulate (MAC) operations, which makes the single-frame runtime long and energy-hungry. To address these limitations, researchers from the University of Zurich and ETH Zurich developed NullHop, a flexible and efficient hardware accelerator architecture that exploits the sparsity of neuron activations. NullHop uses a novel sparse matrix compression algorithm to encode the input data into two elements: a Sparsity Map (SM) and a Non-Zero Value List (NZVL).
This scheme reduces both overall computation time and energy consumption owing to two main features: 1) its ability to skip zero-valued pixels in the input layers without wasting clock cycles on redundant MACs
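The SM/NZVL encoding mentioned in the abstract can be illustrated with a short sketch: the SM holds one bit per pixel (1 where the activation is non-zero), and the NZVL stores only the non-zero values in scan order. The function names below are hypothetical and the values are plain Python numbers; the actual NullHop hardware operates on fixed-point feature maps.

```python
def compress(feature_map):
    """Encode a flat list of activations as (SM, NZVL).

    SM:   one bit per pixel, 1 where the value is non-zero.
    NZVL: the non-zero values, in scan order.
    """
    sm = [1 if v != 0 else 0 for v in feature_map]
    nzvl = [v for v in feature_map if v != 0]
    return sm, nzvl

def decompress(sm, nzvl):
    """Invert compress(): re-insert zeros where SM bits are 0."""
    it = iter(nzvl)
    return [next(it) if bit else 0 for bit in sm]

# Example: a sparse 8-pixel row of a feature map.
fm = [0, 3, 0, 0, 7, 1, 0, 2]
sm, nzvl = compress(fm)
print(sm)    # one bit per pixel
print(nzvl)  # only the non-zero activations are stored
assert decompress(sm, nzvl) == fm
```

Because the SM costs only one bit per pixel, the encoding shrinks whenever a reasonable fraction of activations are zero, and a decoder can skip zero pixels by reading the SM alone.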