Mahmoud Bahmani
Accelerating Transformer Deep Learning Models on FPGAs using High-Level Synthesis.
Supervisor: Luciano Lavagno. Politecnico di Torino, Master's degree programme in Electronic Engineering, 2021
PDF (Tesi_di_laurea) - Thesis. License: Creative Commons Attribution Non-commercial No Derivatives. Download (3MB)
Abstract
In today's electronics industry, logic synthesis starting from an RTL description has long been the dominant method for implementing digital systems on both FPGAs and application-specific chips. Recently, however, High-Level Synthesis (HLS) has matured and is now the choice of many hardware engineers and designers for the implementation of complex digital systems. HLS is an automated process that accepts synthesizable code written in high-level languages such as C, C++, SystemC, and OpenCL (Open Computing Language) and transforms it into an RTL design, which is then implemented on hardware devices such as FPGAs.
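As a rough illustration of the kind of input HLS accepts (a sketch added for this page, not code taken from the thesis), the following C++ kernel in the style of tools such as Xilinx Vitis HLS computes a matrix-vector product, a core operation in Transformer layers. The function name matvec, the size N, and the pragma choices are illustrative assumptions.

// Illustrative HLS-style C++ kernel (not from the thesis): a
// matrix-vector multiply, the building block of Transformer
// attention and feed-forward layers. Names, sizes, and pragmas
// are assumptions for the sketch.
#define N 64  // matrix dimension (illustrative)

void matvec(const float A[N][N], const float x[N], float y[N]) {
// Partition the arrays so the inner loop can read them in parallel.
#pragma HLS ARRAY_PARTITION variable=A complete dim=2
#pragma HLS ARRAY_PARTITION variable=x complete

row_loop:
    for (int i = 0; i < N; i++) {
// Start a new row's computation every clock cycle.
#pragma HLS PIPELINE II=1
        float acc = 0.0f;
    col_loop:
        for (int j = 0; j < N; j++) {
// Fully unroll the dot product across the partitioned arrays.
#pragma HLS UNROLL
            acc += A[i][j] * x[j];
        }
        y[i] = acc;
    }
}

The tool would schedule this loop nest, allocate multipliers and adders, and emit an equivalent RTL design; the pragmas steer that transformation rather than change the program's meaning.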
An FPGA has limited hardware resources in terms of logic cells and interconnect, the latter comprising the wires routed to the power supply, clock, and signal nets.
