Lorenzo Nikiforos
The Multiply-And-Max/min Neural Paradigm as a Pruning and Training Accelerator.
Supervisors: Fabio Pareschi, Luciano Prono, Gianluca Setti. Politecnico di Torino, Master's degree in Computer Engineering (Ingegneria Informatica), 2024
License: Creative Commons Attribution – Non-commercial – No Derivatives.
Abstract
Neural networks have revolutionized the field of artificial intelligence, enabling machines to perform complex tasks once exclusive to human cognition. However, large-scale neural networks present significant computational challenges, both during training on servers and during deployment on embedded devices, and their high computational cost and resource demands impede practical application on low-resource, low-energy hardware. Pruning, a technique that systematically removes redundant parameters, has emerged as a promising way to reduce computational complexity while maintaining performance. This master's thesis explores the effectiveness of a novel layer, Multiply-And-Max/min (MAM), introduced as an alternative to the classical Multiply-And-Accumulate (MAC) approach: instead of summing all the products between inputs and weights, the MAM reduction keeps only the largest and the smallest.
Experimental results demonstrate the efficacy of the MAM-based approach in significantly sparsifying weight matrices under different pruning techniques, particularly Global Gradient Pruning (GGP), which achieved, e.g.
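To make the contrast with MAC concrete, here is a minimal NumPy sketch of the two reductions as described in the abstract. The function names and the max-plus-min combination of the two surviving products are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def mac_layer(x, W):
    # Classical Multiply-And-Accumulate: each output neuron
    # sums ALL products between its weights and the inputs.
    return W @ x

def mam_layer(x, W):
    # Multiply-And-Max/min: compute the same elementwise products,
    # but reduce each row by keeping only the largest and the
    # smallest product (here combined by summing the two).
    products = W * x  # shape (n_out, n_in) via broadcasting
    return products.max(axis=1) + products.min(axis=1)

# Tiny example: one output neuron, three inputs.
W = np.array([[1.0, -2.0, 3.0]])
x = np.array([1.0, 1.0, 1.0])
print(mac_layer(x, W))  # sums 1 - 2 + 3
print(mam_layer(x, W))  # keeps only 3 (max) and -2 (min)
```

Because only two products per output neuron influence the result, the remaining weights contribute rarely to the output, which is what makes MAM layers so amenable to aggressive pruning.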