
Neural Architecture Search Techniques for the Optimized Deployment of Temporal Convolutional Networks at the Edge

Matteo Risso

Supervisors: Daniele Jahier Pagliari, Alessio Burrello. Politecnico di Torino, Master's degree programme in Electronic Engineering, 2021

PDF (Tesi_di_laurea), 3 MB. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract:

For many years, Recurrent Neural Networks (RNNs) have achieved state-of-the-art (SoTA) results in time series analysis, but their large computational complexity makes them ill-suited for deployment on microcontrollers and, in general, on resource-constrained edge devices. A more viable alternative for the efficient deployment of time series tasks is represented by Temporal Convolutional Networks (TCNs), a particular class of Convolutional Neural Networks (CNNs) that achieves results comparable to SoTA RNNs. TCNs offer many advantages from a computational standpoint, making them a more hardware-friendly alternative to RNNs. Nevertheless, the optimized deployment of a TCN on a microcontroller-based edge device still requires careful and time-consuming hand-tuning of the model's hyper-parameters. This tedious process is necessary to achieve a good trade-off between inference accuracy and computational complexity (total number of operations and memory footprint). Moreover, due to the extremely large design space of such architectures, hand-tuning usually leads to sub-optimal solutions. This thesis tackles the problem with Neural Architecture Search (NAS), i.e., the automatic tuning of hyper-parameters by means of an optimization algorithm. Specifically, NAS algorithms are used to search for TCN architectures with hardware-friendly features, i.e., a small memory footprint and/or a reduced number of Floating Point OPerations (FLOPs). Two low-complexity NAS approaches, called MorphNet and Pruning-In-Time (PIT), are applied to a seed TCN in order to train the network on its task and, jointly, optimize the hyper-parameters that define its architecture with respect to a specific metric (e.g., the FLOPs). MorphNet is an existing NAS approach from the literature, which is here combined with PIT, a novel and orthogonal lightweight NAS designed during this thesis work. The two methods are applied to the seed network both independently and jointly, across different setups. This allows the orthogonality of the two NAS methods to be explored, leading to a complete workflow for the optimization of TCNs. The workflow yields new SoTA architectures on the considered time series analysis task and compresses the seed architecture by as much as 99.6%.
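For readers unfamiliar with the general recipe the abstract describes, the following is a minimal, purely illustrative PyTorch sketch: a small seed TCN built from causal dilated convolutions, trained with a MorphNet-style L1 penalty on BatchNorm scale factors so that channels whose scales are driven to zero can later be pruned. The network shape, the value of `lam`, and the simplified penalty (which omits MorphNet's per-layer resource weighting) are assumptions for illustration, not the thesis's exact method; PIT is not shown.

```python
import torch
import torch.nn as nn

class CausalDilatedConv(nn.Module):
    """One TCN layer: 1D convolution padded on the left only, so each
    output depends solely on past samples, with a dilation factor that
    grows the receptive field exponentially across layers."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)
        self.bn = nn.BatchNorm1d(out_ch)

    def forward(self, x):
        x = nn.functional.pad(x, (self.pad, 0))  # left-pad: causality
        return torch.relu(self.bn(self.conv(x)))

def channel_l1_penalty(model):
    """Simplified MorphNet-style complexity proxy: L1 norm of the
    BatchNorm scale factors (gamma). Channels whose gamma is pushed
    to zero contribute nothing and can be pruned after training."""
    return sum(m.weight.abs().sum()
               for m in model.modules()
               if isinstance(m, nn.BatchNorm1d))

# Hypothetical seed TCN for a 10-class time series classification task.
model = nn.Sequential(
    CausalDilatedConv(1, 32, dilation=1),
    CausalDilatedConv(32, 32, dilation=2),
    CausalDilatedConv(32, 32, dilation=4),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 10),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1e-4  # assumed strength: trades task accuracy vs. model size

# One training step on a dummy batch: 8 univariate series of length 128.
x = torch.randn(8, 1, 128)
y = torch.randint(0, 10, (8,))
opt.zero_grad()
loss = nn.functional.cross_entropy(model(x), y) + lam * channel_l1_penalty(model)
loss.backward()
opt.step()
```

After training, channels with near-zero gamma are removed and the slimmed network is retrained; this shrink-and-retrain cycle is how regularizer-based NAS methods of this family trade accuracy against FLOPs and memory footprint.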

Supervisors: Daniele Jahier Pagliari, Alessio Burrello
Academic year: 2020/21
Publication type: Electronic
Number of pages: 85
Subjects:
Degree programme: Master's degree programme in Electronic Engineering
Degree class: New regulations > Master's degree > LM-29 - ELECTRONIC ENGINEERING
Collaborating companies: NOT SPECIFIED
URI: http://webthesis.biblio.polito.it/id/eprint/18048