Bekzod Fazilov
A comprehensive analysis of Sparse Matrix by Vector multiplication on FPGA with different compression formats.
Supervisors: Luciano Lavagno, Filippo Minnella. Politecnico di Torino, Master's degree programme in Mechatronic Engineering (Ingegneria Meccatronica), 2023
PDF (Tesi_di_laurea)
- Thesis
License: Creative Commons Attribution Non-commercial No Derivatives. (2MB)

Archive (ZIP) (Documenti_allegati)
- Other
License: Creative Commons Attribution Non-commercial No Derivatives. (216kB)
Abstract
Today, edge devices are used in many applications, from cloud computing and the Internet of Things (IoT) to the manufacturing sector, to monitor and analyse processes by applying machine learning and other algorithms. Because of their limited performance, energy budget, and memory, edge devices usually run lightweight software such as small, quantized machine learning models. Algorithms that contain matrix-vector multiplication, in particular the quantized fully connected layers of neural networks, have weight matrices that typically consist of a high percentage of zero elements. To reduce resource usage and energy consumption and to increase performance, only the non-zero elements can be used in these operations. Sparse storage formats are therefore helpful, as they avoid multiplication operations involving zero elements.
