
Design of optimized architectures for non-linear convolutional neural networks

Giuseppe Aiello


Supervisors: Maurizio Martina, Guido Masera. Politecnico di Torino, Master's degree programme in Ingegneria Elettronica (Electronic Engineering), 2021

PDF (Tesi_di_laurea) - Thesis
License: Creative Commons Attribution Non-commercial No Derivatives.

Download (13MB)
Abstract:

Nowadays, Machine Learning (ML) has become one of the most important research topics because of its massive use in many applications, such as self-driving cars, speech recognition, email spam detection and, in particular, image recognition and processing. ML applied to digital image processing serves several target applications, including, for example, medical visualization (to improve medical imaging), pattern recognition, noise reduction and image enhancement. In this thesis a new type of neural network (NN) for image processing is used: instead of applying filters with fixed weights, as in standard convolutional layers, this network uses space-variant coefficients. This allows the convolutional layer to change its behaviour depending on the spatial characteristics of the input image. Since the spatial dependence introduces a non-linear behaviour, a Non-Linear Convolution (NLC) replaces the standard linear convolution of a CNN. Networks including NLC achieve performance that is comparable to or better than that of canonical convolutional networks; moreover, they require fewer layers and fewer input features. This thesis focuses on the implementation of a layer of a Non-Linear Convolutional Network (NLCN) on field-programmable gate arrays (FPGAs), which are one of the most important platforms for accelerating ML inference. FPGAs bring many advantages, such as high parallelism, low power consumption and dedicated hardware optimized for digital signal processing (DSPs). Unfortunately, these advantages do not come without a price: while the smaller number of layers of the NLCN with respect to classical CNNs reduces the number of features (an attractive property for embedded accelerators with very limited resources), the complexity of a single NLCN layer is greater than that of a CNN layer. The layer has therefore been designed to achieve a suitable trade-off between memory transactions and computation. Since memory usage in the NLCN is mainly due to the space required for internal computation, parameter storage and intermediate data, several techniques are adopted to find an optimal balance between off-chip memory transactions and computation. Among them are loop tiling and loop unrolling: the former reduces the amount of on-chip memory required on the FPGA, and the latter speeds up the execution of nested loops.
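To make the last two techniques concrete, the following is a minimal C sketch of a tiled 2D convolution whose innermost kernel loops are unrollable. It is purely illustrative: the feature-map and tile sizes are hypothetical, a plain fixed-weight convolution is used instead of the actual NLCN layer described in the thesis, and the HLS pragmas that would drive unrolling on an FPGA are only mentioned in the comments.

#include <stdio.h>

/* Hypothetical sizes, chosen only for illustration. */
#define IN_H   16            /* input feature-map height        */
#define IN_W   16            /* input feature-map width         */
#define K      3             /* convolution kernel size         */
#define OUT_H  (IN_H - K + 1)
#define OUT_W  (IN_W - K + 1)
#define TILE   4             /* output tile size kept "on chip" */

/* Plain 2D convolution computed tile by tile.
 * Loop tiling: only a (TILE+K-1) x (TILE+K-1) patch of the input is
 * buffered at a time, which is what bounds on-chip memory on an FPGA.
 * Loop unrolling: the two innermost kernel loops have a fixed, small
 * trip count (K = 3), so an HLS tool can unroll them into parallel
 * multiply-accumulate hardware (in a real HLS flow a directive such as
 * an unroll pragma would be attached to them). */
static void conv2d_tiled(const float in[IN_H][IN_W],
                         const float w[K][K],
                         float out[OUT_H][OUT_W])
{
    for (int tr = 0; tr < OUT_H; tr += TILE) {         /* tile rows    */
        for (int tc = 0; tc < OUT_W; tc += TILE) {     /* tile columns */

            /* Local buffer: stands in for on-chip BRAM. */
            float patch[TILE + K - 1][TILE + K - 1];
            for (int r = 0; r < TILE + K - 1 && tr + r < IN_H; ++r)
                for (int c = 0; c < TILE + K - 1 && tc + c < IN_W; ++c)
                    patch[r][c] = in[tr + r][tc + c];

            /* Compute the output tile from the local buffer only. */
            for (int r = 0; r < TILE && tr + r < OUT_H; ++r) {
                for (int c = 0; c < TILE && tc + c < OUT_W; ++c) {
                    float acc = 0.0f;
                    for (int kr = 0; kr < K; ++kr)      /* unrollable */
                        for (int kc = 0; kc < K; ++kc)  /* unrollable */
                            acc += w[kr][kc] * patch[r + kr][c + kc];
                    out[tr + r][tc + c] = acc;
                }
            }
        }
    }
}

int main(void)
{
    float in[IN_H][IN_W];
    float w[K][K] = {{0, 0, 0}, {0, 1, 0}, {0, 0, 0}}; /* identity kernel */
    float out[OUT_H][OUT_W];

    for (int r = 0; r < IN_H; ++r)
        for (int c = 0; c < IN_W; ++c)
            in[r][c] = (float)(r * IN_W + c);

    conv2d_tiled(in, w, out);

    /* With the identity kernel the output equals the input shifted by 1. */
    printf("out[0][0] = %.1f (expected %.1f)\n", out[0][0], in[1][1]);
    return 0;
}

With the identity kernel in main, the program simply reproduces the shifted input, which makes the tiling bookkeeping easy to check; in an FPGA implementation the local patch buffer would map to BRAM and the two K x K loops would be unrolled into parallel multipliers, which is the trade-off between memory transactions and computation discussed above.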

Supervisors: Maurizio Martina, Guido Masera
Academic year: 2021/22
Publication type: Electronic
Number of pages: 96
Subjects:
Degree programme: Master's degree programme in Ingegneria Elettronica (Electronic Engineering)
Degree class: New regulations > Master's degree > LM-29 - ELECTRONIC ENGINEERING
Collaborating companies: NOT SPECIFIED
URI: http://webthesis.biblio.polito.it/id/eprint/21027