
Combining coarse- and fine-grained DNAS for TinyML

Pietro Borgaro

Combining coarse- and fine-grained DNAS for TinyML. Rel. Daniele Jahier Pagliari, Matteo Risso, Alessio Burrello. Politecnico di Torino, Master's degree programme in Ingegneria Informatica (Computer Engineering), 2023.

PDF (Tesi_di_laurea), 7 MB - Thesis. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract:

Nowadays, Deep Learning represents the go-to approach for solving recognition and prediction problems in a vast spectrum of application domains, including computer vision, time-series analysis, and natural language processing. For many of these tasks, shifting from a cloud-based computing paradigm to an edge-centric one, where models are deployed directly on IoT nodes, provides several benefits, such as predictable response times and improved privacy. However, the execution of complex deep neural networks (DNNs) on extreme-edge devices, such as low-power microcontrollers, is complicated by their tight memory and energy constraints. Therefore, bringing "intelligence" to the IoT edge requires efficient architectures that minimize the latency and energy consumption of an inference without sacrificing output quality (e.g., classification accuracy). Finding these architectures manually by trial and error is tedious and costly. This thesis has therefore explored efficient automatic optimization algorithms able to explore a vast search space of possible neural network architectures and find those that yield the best accuracy versus complexity trade-off. These methods are commonly referred to as Neural Architecture Search (NAS) tools. First-generation NAS tools were based on extremely time-consuming reinforcement learning or evolutionary algorithms (up to thousands of GPU hours for a single search), while a more efficient recent alternative is Differentiable NAS (DNAS), which simultaneously trains the DNN weights and optimizes its architecture using gradient descent. Within the DNAS domain, two main state-of-the-art approaches have emerged, usually referred to as supernet-based DNAS and mask-based DNAS. Supernet-based DNAS can explore different layer alternatives in a coarse-grained manner, achieving maximum search flexibility. Conversely, mask-based DNAS can optimize the hyper-parameters within a specific layer (e.g., the number of channels in convolutions) at a fine grain. In particular, this thesis work has focused on integrating these two DNAS approaches to build a comprehensive framework able to perform a lightweight search that is both flexible and fine-grained. The developed tool is general and applicable across a wide spectrum of applications. In particular, it has been evaluated on tasks relevant to edge AI (image classification, visual wake words, and speech recognition) taken from the MLPerf Tiny standard benchmark suite.
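To make the combination of the two search styles concrete, the PyTorch sketch below shows one way a searchable block could blend the two mechanisms described in the abstract: a softmax over architectural weights selects among coarse-grained layer alternatives (supernet-style), while a trainable sigmoid mask scales output channels (mask-style, fine-grained), with a differentiable cost term regularizing both. This is a minimal illustration under assumed design choices; the class name, the set of alternatives, and the cost proxy are hypothetical and not taken from the thesis.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CombinedDNASBlock(nn.Module):
        """One searchable block combining the two DNAS styles:
        a softmax over 'alpha' blends coarse-grained layer alternatives
        (supernet-style), while a sigmoid mask over 'theta' scales output
        channels (mask-style, fine-grained)."""

        def __init__(self, in_ch, out_ch):
            super().__init__()
            # Coarse grain: a hypothetical set of alternative layers.
            self.alternatives = nn.ModuleList([
                nn.Conv2d(in_ch, out_ch, 3, padding=1),
                nn.Conv2d(in_ch, out_ch, 5, padding=2),
                nn.Conv2d(in_ch, out_ch, 1),
            ])
            self.alpha = nn.Parameter(torch.zeros(len(self.alternatives)))
            # Fine grain: one trainable mask parameter per output channel.
            self.theta = nn.Parameter(torch.ones(out_ch))

        def forward(self, x):
            gamma = F.softmax(self.alpha, dim=0)   # layer-choice weights
            out = sum(g * layer(x) for g, layer in zip(gamma, self.alternatives))
            mask = torch.sigmoid(self.theta)       # soft channel mask in (0, 1)
            return out * mask.view(1, -1, 1, 1)

        def complexity(self):
            # Differentiable cost proxy: expected parameter count of the
            # blended alternatives, scaled by the fraction of channels kept.
            gamma = F.softmax(self.alpha, dim=0)
            params = torch.tensor(
                [sum(p.numel() for p in l.parameters()) for l in self.alternatives],
                dtype=gamma.dtype, device=gamma.device)
            keep_ratio = torch.sigmoid(self.theta).mean()
            return keep_ratio * (gamma * params).sum()

    # Joint training: task loss plus a complexity penalty, optimized together
    # over the regular weights and the architectural parameters by gradient descent.
    block = CombinedDNASBlock(in_ch=3, out_ch=16)
    opt = torch.optim.Adam(block.parameters(), lr=1e-3)
    x, target = torch.randn(8, 3, 32, 32), torch.randn(8, 16, 32, 32)
    loss = F.mse_loss(block(x), target) + 1e-6 * block.complexity()
    loss.backward()
    opt.step()

After such a search converges, alternatives with negligible softmax weight and channels with near-zero mask values would typically be discarded, and the resulting discrete network fine-tuned before deployment.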

Relators: Daniele Jahier Pagliari, Matteo Risso, Alessio Burrello
Academic year: 2022/23
Publication type: Electronic
Number of Pages: 80
Subjects:
Degree programme: Master's degree programme in Ingegneria Informatica (Computer Engineering)
Degree class: New organization > Master of science > LM-32 - COMPUTER SYSTEMS ENGINEERING
Collaborating companies: UNSPECIFIED
URI: http://webthesis.biblio.polito.it/id/eprint/26716