
Differentiable Neural Architecture Search Algorithms for TinyML benchmarks

Fabio Eterno


Supervisors: Daniele Jahier Pagliari, Alessio Burrello, Matteo Risso. Politecnico di Torino, Master's degree programme in Data Science And Engineering, 2022

PDF (Tesi_di_laurea), 13MB. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract:

Nowadays, Artificial Intelligence (AI), especially in the form of Machine Learning (ML) and Deep Learning (DL), is becoming the go-to approach for solving complex problems in several sectors, such as Computer Vision (CV), Speech Recognition, Natural Language Processing (NLP), and many others. Despite the huge effort spent by public and private actors to reach state-of-the-art results, the design of Deep Neural Networks (DNNs) is still a manual process heavily based on empirical rules and heuristics, and thus requires designers with strong expertise. This has inspired researchers to define a new family of algorithms called Neural Architecture Search (NAS). NAS algorithms are becoming very popular in the TinyML/TinyDL domain, where the choice of the specific ML/DL model structure is of primary importance. Indeed, deploying DNNs on tiny devices (e.g., small microcontrollers, IoT nodes, etc.) requires considering not only the final accuracy reached by the model, but also the hardware constraints of the target device in terms of memory footprint, latency, and energy consumption. This thesis focuses on the development of a toolkit that supports future developers in the training and evaluation of a family of state-of-the-art NAS techniques, called mask-based Differentiable NAS (DNAS), for TinyML use cases. First, this thesis pursues the development of FlexNAS, i.e., a flexible library for testing and comparing different DNAS techniques. In particular, a complete set of unit tests has been designed to verify the correct behavior of the different steps involved in the search phase of DNAS. Second, we consider the industrial-grade MLPerf Tiny benchmark suite. The tasks therein represent industrially relevant use cases for which it is important to explore and measure the trade-offs between accuracy, latency, and energy of DL networks when deployed on embedded devices. This benchmark suite was originally developed in TensorFlow.
Because different libraries are involved (PyTorch for FlexNAS and TensorFlow for the MLPerf Tiny benchmarks), a complete refactoring of the MLPerf Tiny scripts was necessary to make them compatible with the FlexNAS ecosystem. The final library allows the user to easily evaluate FlexNAS on TinyML datasets, and it is designed to be easily extended to other NAS-able models and benchmarks, constituting a solid basis for future research.
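To illustrate the core idea behind mask-based DNAS, the following is a minimal PyTorch sketch (illustrative only; it does not reproduce the FlexNAS API, and all class and method names here are hypothetical). A trainable mask gates each output channel of a convolution, and a differentiable size term added to the task loss lets the optimizer trade accuracy against model cost during the search phase:

```python
# Minimal mask-based DNAS sketch (NOT the FlexNAS API; names are illustrative).
import torch
import torch.nn as nn

class MaskedConv2d(nn.Module):
    """Conv layer whose output channels are gated by a trainable mask."""
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2)
        # One architectural (NAS) parameter per output channel.
        self.alpha = nn.Parameter(torch.ones(out_ch))

    def forward(self, x):
        # Sigmoid keeps the mask in (0, 1) and differentiable,
        # so gradient descent can "turn channels off" smoothly.
        mask = torch.sigmoid(self.alpha)
        return self.conv(x) * mask.view(1, -1, 1, 1)

    def size_cost(self):
        # Differentiable proxy for the number of "alive" channels,
        # i.e., a stand-in for memory footprint / latency cost.
        return torch.sigmoid(self.alpha).sum()

# One search step: task loss plus a cost regularizer weighted by lambda.
layer = MaskedConv2d(3, 8, 3)
x = torch.randn(2, 3, 16, 16)
out = layer(x)
loss = out.pow(2).mean() + 1e-3 * layer.size_cost()
loss.backward()  # gradients flow to both weights and alpha
```

After the search converges, channels whose mask value is close to zero can be pruned, yielding a smaller architecture; real mask-based DNAS tools replace the toy cost proxy above with hardware-aware models of memory, latency, or energy.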

Supervisors: Daniele Jahier Pagliari, Alessio Burrello, Matteo Risso
Academic year: 2022/23
Publication type: Electronic
Number of pages: 86
Subjects:
Degree programme: Master's degree programme in Data Science And Engineering
Degree class: New regulations > Master's degree > LM-32 - COMPUTER ENGINEERING
Collaborating companies: Politecnico di Torino
URI: http://webthesis.biblio.polito.it/id/eprint/24548