
Deploying Deep Learning on FPGA: an assessment of ConvNets performance on Xilinx Zynq MPSoC using Vitis-AI development platform

Gabriele Cuni

Supervisors: Andrea Calimera, Roberto Giorgio Rizzo. Politecnico di Torino, Master's degree course in Data Science And Engineering, 2022

PDF (Tesi_di_laurea) - Thesis
Licence: Creative Commons Attribution Non-commercial No Derivatives.

Download (2MB)
Abstract:

Deep neural networks are one of the most promising technologies in the IoT field; nevertheless, they require a large number of operations to be executed. IoT application development is often subject to strict limitations in terms of hardware resources, which makes it difficult to use deep machine learning techniques on edge devices. In addition, edge computing often requires low execution times in order to be suitable for real-time applications. These contrasting requirements pose a challenging technology problem, which can be addressed by deploying deep neural networks on efficient, optimised hardware. Field-programmable gate arrays (FPGAs) can be a viable alternative to GPUs for accelerating deep neural network inference, even on edge devices. In this thesis, we propose an assessment of ConvNet performance achieved through Vitis-AI on a Zynq UltraScale+ MPSoC. The assessment tests a set of MobileNets, obtained by varying the width multiplier and the input resolution, on different FPGA configurations, obtained by varying the resources allocated to the Deep Learning Processor Unit (DPU). On one hand, the results show a high throughput, compatible with real-time applications, even on the smallest available FPGA architecture. On the other hand, all models suffer a large accuracy reduction due to the use of post-training quantization. However, taking the accuracy achievable with quantization-aware training as an upper bound, the results show that deploying deep neural networks on FPGA is a viable option.
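To make the methodology described above concrete, the sketch below shows how a family of MobileNet variants can be generated by sweeping the width multiplier and the input resolution, and how one such floating-point model would then be post-training quantized for the DPU. It is a minimal illustration assuming the TensorFlow 2 flow of the Vitis-AI quantizer (vitis_quantize.VitisQuantizer); the calibration images (calib_images) and output file names are placeholders, not artifacts of the thesis.

```python
# Sketch: generate MobileNet variants (width multiplier x input resolution)
# and post-training-quantize one of them for the DPU.
# Assumptions: TensorFlow 2 with Keras, and a Vitis-AI environment providing
# tensorflow_model_optimization.quantization.keras.vitis_quantize.
import numpy as np
import tensorflow as tf

WIDTH_MULTIPLIERS = [0.25, 0.5, 0.75, 1.0]   # Keras 'alpha' parameter
INPUT_RESOLUTIONS = [128, 160, 192, 224]     # square input sizes

def build_mobilenet_family():
    """Instantiate one ImageNet-pretrained MobileNet v1 per (alpha, resolution) pair."""
    models = {}
    for alpha in WIDTH_MULTIPLIERS:
        for res in INPUT_RESOLUTIONS:
            models[(alpha, res)] = tf.keras.applications.MobileNet(
                input_shape=(res, res, 3), alpha=alpha, weights="imagenet")
    return models

def quantize_for_dpu(float_model, calib_images):
    """Post-training quantization, assuming the Vitis-AI TF2 quantizer API."""
    from tensorflow_model_optimization.quantization.keras import vitis_quantize
    quantizer = vitis_quantize.VitisQuantizer(float_model)
    # A few hundred unlabeled, preprocessed images are typically used for calibration.
    return quantizer.quantize_model(calib_dataset=calib_images)

if __name__ == "__main__":
    models = build_mobilenet_family()
    float_model = models[(0.5, 160)]
    # Placeholder calibration set: replace with real preprocessed samples.
    calib_images = np.random.rand(100, 160, 160, 3).astype("float32")
    quantized = quantize_for_dpu(float_model, calib_images)
    quantized.save("mobilenet_0.5_160_quantized.h5")
```

The quantized model would then be compiled for the target DPU configuration before being deployed on the board; the accuracy gap reported in the abstract stems from this post-training quantization step rather than from the FPGA execution itself.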

Supervisors: Andrea Calimera, Roberto Giorgio Rizzo
Academic year: 2022/23
Publication type: Electronic
Number of pages: 42
Subjects:
Degree course: Master's degree course in Data Science And Engineering
Degree class: New regulations > Master's degree > LM-32 - COMPUTER ENGINEERING
Partner companies: NOT SPECIFIED
URI: http://webthesis.biblio.polito.it/id/eprint/24513