Gianmarco Canzonieri
Design and implementation of neural networks on FPGA based on model compression analysis.
Supervisors: Luciano Lavagno, Marisa López-Vallejo. Politecnico di Torino, Master's degree programme in Electronic Engineering, 2020
License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
The increasing amount of data available today has driven the spread of machine learning algorithms. Machine learning is widely used in applications such as speech recognition, natural language processing, and robotics. The most popular technique for these purposes is neural networks. These models require a great amount of computational capacity, which until now has mainly been provided by GPUs. Recently, field-programmable gate arrays (FPGAs) have become more common in these applications. The main difference between GPUs and FPGAs is that the latter offer the possibility of designing application-specific hardware instead of relying on a fixed architecture. FPGAs also offer great parallel computation capacity as well as low power consumption compared with GPUs.
