FPGA-based Deep Learning Inference Acceleration at the Edge
Andrea Casale
Advisor: Mihai Teodor Lazarescu. Politecnico di Torino, Master's degree programme in Ingegneria Elettronica (Electronic Engineering), 2021
License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
Deep Neural Networks (DNNs) have become the most widely used computational model in the majority of Machine Learning (ML) applications due to the high accuracy they can achieve. This accuracy comes at the cost of elevated computational complexity and high memory demand, for both training and inference, making DNN implementations on systems with limited resources and stringent energy consumption constraints a challenging task. To address this challenge, exploiting the large amount of parallelism exhibited by such networks is the key to optimizing the execution of Deep Learning (DL) algorithms. This has motivated, over time, the development of dedicated accelerators based on different hardware platforms, capable of making DNN inference at the edge efficient in terms of both latency and energy.
In the context of low-power embedded applications, the development of application-tailored accelerators based on custom hardware, combined with approximate computing methods, is an effective solution for DNN inference at the edge, because it allows computationally expensive neural networks to be transformed into smaller, sparser models.
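One common approximate computing method alluded to above is weight pruning, which trades a small accuracy loss for a sparser, cheaper model. As a minimal illustrative sketch (not the thesis's actual method; the function name and parameters are assumptions), magnitude-based pruning zeroes the smallest weights of a layer until a target sparsity is reached:

```python
import numpy as np

def prune_by_magnitude(weights, sparsity):
    """Zero the smallest-magnitude entries so that `sparsity`
    (a fraction in [0, 1]) of the weights become zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to prune
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value acts as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(weights) <= threshold] = 0.0
    return pruned

# Example on a hypothetical 8x8 dense layer
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
sparse_w = prune_by_magnitude(w, 0.75)
print(np.mean(sparse_w == 0.0))  # fraction of zeroed weights, 0.75 here
```

The resulting sparse matrix can be stored in a compressed format and skipped-over by a hardware accelerator, which is what makes such models attractive for resource-constrained edge devices.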