Politecnico di Torino

Architectural exploration and efficient FPGA implementation of convolutional neural networks

Rachele Setto


Rel. Luciano Lavagno. Politecnico di Torino, Corso di laurea magistrale in Ingegneria Elettronica (Electronic Engineering), 2021

PDF (Tesi_di_laurea) - Thesis
License: Creative Commons Attribution Non-commercial No Derivatives.


Image recognition algorithms are now used in many fields, ranging from face recognition on mobile phones to object detection from drones, and even to landing rovers on Mars. Among these algorithms, Convolutional Neural Networks (CNNs) are the most widely used. Although their structure is simple and easy to understand, their computational cost and memory requirements remain challenging, especially when inference is run on FPGAs, which are well suited to embedded systems and data centers thanks to their low energy consumption. In this thesis, an architecturally optimized CNN is taken as the starting point for further data-precision optimization. This network, called SkyNet, is the winner of the System Design Contest for low-power object detection at the 56th IEEE/ACM Design Automation Conference (DAC-SDC). Given an image, the network detects the objects present in it. To optimize this network, a quantization-aware training (QAT) technique is adopted, which reduces the number of bits used to store the network parameters. The goal of QAT is to find the best trade-off between memory savings and accuracy loss; Brevitas, from Xilinx Research Labs, proved to be a very good library for this purpose. This thesis describes how to use Brevitas to quantize networks (by quantizing SkyNet) and how quantization is implemented in the library. After QAT, the model is optimized, synthesized, and implemented using the FINN compiler which, like Brevitas, was developed by Xilinx Research Labs. The thesis describes in detail the steps required in FINN to implement the network on a target FPGA: exporting the model from Brevitas, optimizing it with Transformation functions, and finally deploying the network on the target device using Vivado HLS and the Vivado Design Suite.
Furthermore, the main FINN problems encountered during the development of the quantized network are listed and analyzed, with partial solutions on how to fix them. In conclusion, a comparison between the initial SkyNet network and its quantized version is reported, highlighting the reduction in the memory required to store the network parameters.
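As an illustrative sketch of the idea behind QAT, the following plain-Python fragment shows uniform symmetric "fake" quantization of a weight tensor and the resulting storage saving. This is not the Brevitas implementation described in the thesis; the per-tensor scale choice and rounding scheme are simplifying assumptions for demonstration only.

```python
# Illustrative sketch of uniform symmetric fake-quantization, the core
# operation behind quantization-aware training (QAT). NOT the Brevitas
# implementation: scale selection and rounding are simplified here.

def quantize_dequantize(weights, bit_width):
    """Snap each float weight to the nearest level of a symmetric
    signed bit_width-bit integer grid, then map it back to float."""
    qmax = 2 ** (bit_width - 1) - 1          # e.g. 7 for 4 bits
    max_abs = max(abs(w) for w in weights)
    if max_abs == 0:
        return list(weights), 1.0
    scale = max_abs / qmax                   # one scale per tensor (assumption)
    quantized = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return [q * scale for q in quantized], scale

def memory_bytes(num_params, bit_width):
    """Storage needed for num_params values at the given bit width."""
    return num_params * bit_width / 8

# Example: quantize a tiny weight vector to 4 bits.
w = [0.7, -0.35, 0.05, -0.7]
w_q, scale = quantize_dequantize(w, bit_width=4)

# Going from 32-bit floats to 4-bit integers saves 8x memory.
saving = memory_bytes(len(w), 32) / memory_bytes(len(w), 4)
```

The trade-off the abstract mentions is visible here: smaller `bit_width` shrinks `memory_bytes` but coarsens the grid, so each weight moves further from its original value, which is what QAT compensates for during training.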

Supervisor: Luciano Lavagno
Academic year: 2020/21
Publication type: Electronic
Number of Pages: 90
Degree programme: Corso di laurea magistrale in Ingegneria Elettronica (Electronic Engineering)
Degree class: New organization > Master of Science > LM-29 - ELECTRONIC ENGINEERING
Partner companies: UNSPECIFIED
URI: http://webthesis.biblio.polito.it/id/eprint/17899