A Survey: Hardware Neural Architecture Search On FPGA/ASIC
Shulin Deng
Rel. Mario Roberto Casu. Politecnico di Torino, Corso di laurea magistrale in Ingegneria Elettronica (Electronic Engineering), 2024
License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
Deep learning (DL) systems are revolutionizing technology across many fields. These breakthroughs are driven by the availability of big data, tremendous growth in computational power, advances in hardware acceleration, and recent algorithmic innovations. The rapid development of deep learning has spurred demand for efficient hardware implementations capable of handling complex neural network architectures. Owing to their flexibility and performance efficiency, Field-Programmable Gate Arrays (FPGAs) and Application-Specific Integrated Circuits (ASICs) have become key platforms for accelerating deep learning workloads. To fully harness the potential of these hardware platforms, researchers have turned to Neural Architecture Search (NAS), a promising paradigm that automates the design of neural network architectures tailored to specific hardware constraints.
Recently, integrating hardware awareness into the search loop (i.e., HW-NAS) has attracted considerable research interest and opened up exciting new research directions.
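To make the idea of "hardware awareness in the search loop" concrete, the sketch below ranks candidate architectures by accuracy discounted by an estimated latency penalty, in the style of latency-weighted multi-objective rewards used in HW-NAS (e.g., MnasNet). All names, weights, and candidate numbers here are invented for illustration, not taken from the surveyed works.

```python
# Illustrative HW-NAS scoring: accuracy is penalized by how far the
# estimated latency exceeds a hardware target (MnasNet-style reward).
# The exponent alpha and the candidates below are hypothetical.

def hw_aware_score(accuracy, latency_ms, target_ms=10.0, alpha=0.07):
    """Return a hardware-aware fitness value (higher is better)."""
    penalty = (latency_ms / target_ms) ** alpha
    return accuracy / penalty

# Hypothetical candidate architectures with measured/estimated metrics.
candidates = [
    {"name": "net_a", "accuracy": 0.92, "latency_ms": 25.0},
    {"name": "net_b", "accuracy": 0.90, "latency_ms": 8.0},
    {"name": "net_c", "accuracy": 0.85, "latency_ms": 4.0},
]

# The search loop would propose candidates and keep the best-scoring one.
best = max(
    candidates,
    key=lambda c: hw_aware_score(c["accuracy"], c["latency_ms"]),
)
```

Here the most accurate network (net_a) loses to net_b because it overshoots the latency target, which is exactly the trade-off HW-NAS automates.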
