Valentina Marino
Hardware Acceleration of AdderNet via High-Level Synthesis for FPGA.
Rel. Luciano Lavagno. Politecnico di Torino, Master of science program in Electronic Engineering, 2024
PDF (Tesi_di_laurea) - Thesis
Licence: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
Convolutional Neural Networks (CNNs) are widely used for machine learning tasks but often come with high computational costs due to their reliance on resource-intensive Multiply-Accumulate (MAC) operations. As a more efficient alternative, AdderNet (AddNN) replaces these MAC operations with simpler Sum-of-Absolute-Difference (SAD) operations, employing an ℓ1-norm-based approach. While this architecture reduces computational expenses, it has not yet achieved the same level of hardware optimization as CNNs, particularly in areas such as effective quantization, accelerator design, and efficient use of FPGA resources like DSP slices. This thesis presents an efficient quantized implementation of the AddNN ResNet20 architecture using an 8-bit fixed-point quantization scheme.
Developed with the Brevitas framework, this approach significantly reduces memory usage and computational overhead, enabling efficient deployment on FPGAs.
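To illustrate the core idea described in the abstract, the following is a minimal sketch (not taken from the thesis) contrasting a standard convolution filter response, computed with multiply-accumulate operations, against the AdderNet variant, which measures similarity as the negative ℓ1 distance between the input patch and the filter weights. Function names and the toy inputs are illustrative assumptions.

```python
import numpy as np

def mac_filter_response(patch, weights):
    # Standard CNN: cross-correlation via Multiply-Accumulate (MAC).
    return np.sum(patch * weights)

def sad_filter_response(patch, weights):
    # AdderNet: similarity as the negative l1 distance, i.e. the
    # negated Sum of Absolute Differences (SAD) -- no multiplications.
    return -np.sum(np.abs(patch - weights))

# Toy 2x2 example: same inputs, two different similarity measures.
patch = np.array([[1.0, 2.0], [3.0, 4.0]])
weights = np.array([[1.0, 0.0], [0.0, 1.0]])
print(mac_filter_response(patch, weights))   # MAC response
print(sad_filter_response(patch, weights))   # SAD (l1-based) response
```

Because the SAD form needs only additions, subtractions, and absolute values, it avoids the multipliers (e.g. FPGA DSP slices) that dominate the cost of conventional convolutions, which is the motivation for the accelerator design explored in the thesis.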
