Ruben Berteletti
Evaluating Algorithmic Optimization Strategies for Training CNNs Under Resource Constraints.
Supervisors: Andrea Calimera, Valentino Peluso. Politecnico di Torino, Master's degree programme in Data Science And Engineering, 2022
Abstract:
With the rise of the Internet of Things (IoT) paradigm, the number of web-connected devices grows day by day, and with it the interest in exploiting the data they generate. Through data-driven models, that information can be leveraged to make decisions or produce forecasts, providing solutions for a wide range of tasks such as image classification, object detection, and speech recognition. Obtaining usable models requires a training procedure, a computationally expensive operation that traditionally has an open-ended formulation, meaning it can run for a virtually unlimited period. Although training is usually performed on GPUs and TPUs, hardware optimized to handle MAC operations efficiently thanks to their parallelism, the energy required is often high, especially when the task of interest becomes harder and demands a more sophisticated architecture and/or a larger dataset; the resulting energy demand can have a relevant environmental impact.

Nevertheless, the vast majority of works in the literature aim to push performance to new highs by building custom patterns or developing ever-larger architectures, without accounting for the resources required. This inevitably leads to a bottleneck in the pipeline, since the hardware will soon be unable to process such models or data in a reasonable amount of time: on one side, it restricts access to the most powerful machines for researchers and practitioners, limiting their experiments and reducing inclusiveness; on the other, it increases the CO2 emissions associated with training. However, depending on the task of interest, a quick result is often preferred over a more accurate one, and it may be worth relaxing the pipeline to save resources and energy.

Since training time is proportional to the energy consumed, and in turn to the carbon emissions, the solution proposed in this work is to explore the framework known in the literature as the Budgeted Training Scenario, adopting training time as the budget: a time constraint on the execution is fixed in advance and, once it is met, training is interrupted, with the objective of saving resources through a shorter pipeline. Focusing on the image classification task, the contribution of this work is to approach, through algorithmic optimization, the accuracy obtained when training is unlimited, self-constraining the process according to different budgets based on elapsed time and leveraging the three primary knobs of a standard pipeline: the data, by feeding the network only the most informative samples; the model, by scaling down its width; and the hyper-parameters, by adopting a budget-aware learning rate scheduler. Evaluating and combining the three knobs with a ResNet18 on the CIFAR10/100 datasets demonstrates that budgeted training can approach the performance achieved by the unconstrained pipeline, obtaining results with a speedup of up to 20x while admitting a limited loss in accuracy, and building an efficient framework suited to all applications where obtaining faster results is crucial and the availability of resources is limited.
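As an illustration of the time-budgeted training and budget-aware learning rate scheduling described in the abstract, the following is a minimal PyTorch sketch, not the thesis code: it trains a ResNet18 on CIFAR10 under a wall-clock time budget and anneals the learning rate as a function of the fraction of the budget already spent. The budget length, the SGD hyper-parameters, and the linear decay shape are assumptions made here for illustration only.

```python
import time
import torch
import torch.nn as nn
import torchvision
from torch.utils.data import DataLoader
from torchvision import transforms

# Assumed settings (not taken from the thesis): a 10-minute wall-clock budget,
# SGD with momentum, and a linear decay of the learning rate over the budget.
BUDGET_SECONDS = 10 * 60
BASE_LR = 0.1

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torchvision.models.resnet18(num_classes=10).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=BASE_LR,
                                momentum=0.9, weight_decay=5e-4)
    criterion = nn.CrossEntropyLoss()

    train_set = torchvision.datasets.CIFAR10(
        root="./data", train=True, download=True, transform=transforms.ToTensor()
    )
    train_loader = DataLoader(train_set, batch_size=128, shuffle=True)

    start = time.monotonic()
    done = False
    while not done:
        for images, labels in train_loader:
            # Fraction of the time budget already consumed.
            progress = (time.monotonic() - start) / BUDGET_SECONDS
            if progress >= 1.0:
                done = True        # budget exhausted: interrupt training
                break

            # Budget-aware schedule: the learning rate depends on the elapsed
            # fraction of the budget rather than on a fixed epoch count.
            for group in optimizer.param_groups:
                group["lr"] = BASE_LR * (1.0 - progress)

            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```

The key design choice in this sketch is that the schedule is driven by elapsed wall-clock time, so the learning rate always reaches its final value exactly when the budget expires, regardless of how many epochs fit into the allotted time.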
Supervisors: Andrea Calimera, Valentino Peluso
Academic year: 2021/22
Publication type: Electronic
Number of pages: 62
Additional information: Confidential thesis. Full text not available
Subjects:
Degree programme: Master's degree programme in Data Science And Engineering
Degree class: New system > Master's degree > LM-32 - COMPUTER ENGINEERING
Collaborating companies: Politecnico di Torino
URI: http://webthesis.biblio.polito.it/id/eprint/23566