Francesco Vaiana
Architectural design of a configurable hardware accelerator for neural network processing.
Rel. Andrea Calimera. Politecnico di Torino, Corso di laurea magistrale in Ingegneria Informatica (Computer Engineering), 2018
Abstract
Machine learning applications have become widespread across many technological fields, ranging from pattern recognition, such as image classification and data mining, to complex human-interaction applications, such as autonomous driving, natural language processing, and robotics. A large class of machine learning algorithms are deep neural networks, composed of a cascade of several non-linear layers, such as convolutional, activation, down-sampling, and classification layers. The intensive amount of data processing needed to perform inference makes neural networks poorly suited to standard processor architectures. This has motivated research into ASIC accelerators that, with a highly parallel spatial architecture, can process multiple data elements more efficiently, enabling neural network processing alongside general-purpose computing systems.
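The layer cascade described above can be made concrete with a minimal behavioral sketch. The layer shapes, kernel, and weights below are illustrative assumptions, not taken from the thesis; the point is only the sequence convolution → activation → down-sampling → classification.

```python
import numpy as np

def conv2d(x, k):
    # Convolutional layer: valid-mode 2-D convolution, single channel.
    h = x.shape[0] - k.shape[0] + 1
    w = x.shape[1] - k.shape[1] + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def relu(x):
    # Activation layer: element-wise non-linearity.
    return np.maximum(x, 0.0)

def pool2x2(x):
    # Down-sampling layer: 2x2 max pooling (assumes even dimensions).
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2 * h, :2 * w].reshape(h, 2, w, 2).max(axis=(1, 3))

def classify(x, w):
    # Classification layer: fully connected scores, argmax as the label.
    return int(np.argmax(w @ x.ravel()))

# Toy inference pass through the cascade.
x = np.arange(36, dtype=float).reshape(6, 6)   # hypothetical input map
k = np.ones((3, 3)) / 9.0                      # averaging kernel
feat = pool2x2(relu(conv2d(x, k)))
label = classify(feat, np.eye(4))              # identity weights, 4 classes
```

Each function stands in for one layer type named in the abstract; a real accelerator would execute the same dataflow over fixed-point hardware units rather than floating-point software.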
The goal of this thesis is to design a hardware accelerator for the inference process that enables new kinds of optimization by splitting the neuron array into two separate arrays: an array of multipliers and an array of accumulators.
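A minimal behavioral sketch of this split follows. The abstract gives no microarchitectural detail, so the array sizes, the per-neuron grouping of products, and the two-function decomposition are assumptions made purely for illustration.

```python
def multiplier_array(weights, activations):
    # Stage 1 (hypothetical): an array of multipliers producing all
    # weight-activation partial products in parallel.
    return [w * a for w, a in zip(weights, activations)]

def accumulator_array(products, n_neurons):
    # Stage 2 (hypothetical): an array of accumulators, each reducing
    # the partial products belonging to one neuron.
    per_neuron = len(products) // n_neurons
    return [sum(products[i * per_neuron:(i + 1) * per_neuron])
            for i in range(n_neurons)]

# Toy example: 2 neurons, 3 inputs each.
weights = [1, 2, 3, 4, 5, 6]
acts    = [1, 1, 1, 2, 2, 2]
sums = accumulator_array(multiplier_array(weights, acts), n_neurons=2)
```

Decoupling the two stages is what would open the optimizations the abstract alludes to, e.g. gating or scheduling the multiplier array independently of the accumulators; the concrete benefits are developed in the thesis itself.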