Accelerating Federated Learning via In-Network Processing
Vera Altamore
Accelerating Federated Learning via In-Network Processing.
Supervisors: Guido Marchetto, Alessio Sacco. Politecnico di Torino, Master's degree in Computer Engineering (Ingegneria Informatica), 2022
License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
The unceasing development of Machine Learning (ML) and the evolution of Deep Learning have revolutionized many application domains, ranging from natural language processing to video analytics, biology, and medical predictions. The most common approach to training ML models is cloud-centric: data owners transmit their training data to a public cloud server, where more powerful resources reside, for processing. However, this approach is often unfeasible due to privacy laws and restrictions, as well as the burden placed on network communications by the massive quantities of data that must be transmitted to a distant cloud server. To solve these problems, in 2016 Google introduced the concept of Federated Learning (FL), with the objective of building machine learning models that take the security and privacy of data into account.
In FL, instead of transferring the data to a central server, the ML model itself is deployed to the individual devices, which train it on their local data; only the parameters of the locally trained models are sent back to the central server, where they are aggregated into the global ML/DL model.
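As a rough illustration of this exchange, the following Python code is a minimal sketch of one common aggregation rule, federated averaging (FedAvg); it is not taken from the thesis, and all function names, parameters, and the toy linear-regression task are hypothetical.

# Minimal FedAvg sketch (illustrative assumption, not the thesis implementation):
# each client trains on its private data and only parameters travel to the server.
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on a
    linear least-squares model; the raw data never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - labels) / len(labels)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: average the clients' parameters,
    weighted by how many samples each client trained on."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Three clients with private, differently sized datasets.
    clients = []
    for n in (50, 80, 30):
        X = rng.normal(size=(n, 2))
        y = X @ true_w + 0.01 * rng.normal(size=n)
        clients.append((X, y))

    global_w = np.zeros(2)
    for _round in range(10):  # communication rounds
        updates = [local_update(global_w, X, y) for X, y in clients]
        global_w = fed_avg(updates, [len(y) for _, y in clients])
    print("global model after FedAvg:", global_w)

Each communication round thus moves only the model parameters (here, a two-element weight vector) across the network rather than the raw data, which is precisely the traffic pattern that in-network processing could aggregate on the path to the server.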