
Scheduling Kubernetes Tasks with Reinforcement Learning

Sonia Matranga

Scheduling Kubernetes Tasks with Reinforcement Learning.

Supervisors: Alessio Sacco, Guido Marchetto. Politecnico di Torino, Master's degree programme in Ingegneria Informatica (Computer Engineering), 2024

PDF (Tesi_di_laurea) - Thesis, 2MB. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract:

In the world of cloud services, the growing complexity of distributed applications and the increase in energy consumption call for more efficient resource management. For this reason, orchestrators such as Kubernetes are widely employed to automate the handling of workloads and resource usage, determining at every moment the most suitable node on which to start a new task. At the same time, the expanding application of artificial intelligence algorithms, particularly reinforcement learning, opens up new development opportunities and enables increasingly autonomous, state-of-the-art systems. This thesis introduces and develops an alternative approach to scheduling within Kubernetes clusters. Specifically, the proposed scheduler uses a Deep Q-Network (DQN) reinforcement-learning algorithm, integrated as a custom plugin in the scoring phase of the scheduling chain, to optimize the distribution of load across the available nodes. In developing this approach, each RL model has been trained to learn a distinct policy with a specific objective, such as load balancing, energy consumption optimization, or node-user latency optimization. The reinforcement-learning algorithm implemented in the plugin dynamically assesses the resources available on the cluster nodes and learns to manage them while adhering to user-defined constraints. By assigning each node a score based on its suitability for hosting new pods, this approach supports decision-making and serves as a predictive tool for the scheduling system; over time, the system continually improves its decisions on how to distribute new workloads in accordance with the learned policy. The implementation has been tested in a Kubernetes Kind environment, allowing an assessment of the overall performance of the developed system and the effectiveness of the proposed approach. In particular, the results show that the policy learned by the agent, referred to as EC-RL, is the best choice when the goal is to reduce energy consumption and node-user latency, outperforming both the other tested policies and the default behavior of the Kubernetes scheduler.
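
The abstract outlines the core mechanism at a high level: a plugin in the scheduler's scoring phase feeds each candidate node's resource state into a trained DQN and uses the resulting value as that node's score. The snippet below is only a minimal illustrative sketch of that idea, not the thesis implementation: the feature set (CPU, memory, energy, and latency figures), the network architecture, and the min-max normalization of Q-values to the 0-100 range used by Kubernetes scoring plugins are all assumptions made for this example.

# Illustrative sketch only: a small DQN that scores candidate nodes for a pod.
# Features, network size, and the 0-100 normalization are assumptions,
# not details taken from the thesis.
import torch
import torch.nn as nn


class NodeScoringDQN(nn.Module):
    """Maps a node-state feature vector to a Q-value for the action
    'place the incoming pod on this node'."""

    def __init__(self, num_features: int = 4, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # one Q-value per node state
        )

    def forward(self, node_state: torch.Tensor) -> torch.Tensor:
        return self.net(node_state)


def score_nodes(model: nn.Module, node_states: torch.Tensor) -> list[int]:
    """Return one integer score per candidate node, min-max scaled to the
    0-100 range expected from Kubernetes scoring plugins."""
    with torch.no_grad():
        q_values = model(node_states).squeeze(-1)  # shape: (num_nodes,)
        q_min, q_max = q_values.min(), q_values.max()
        if torch.isclose(q_max, q_min):
            return [100] * node_states.shape[0]
        scaled = 100 * (q_values - q_min) / (q_max - q_min)
    return [int(s.item()) for s in scaled]


if __name__ == "__main__":
    # Hypothetical node features: [cpu_free, mem_free, energy_cost, latency],
    # already normalized to [0, 1].
    nodes = torch.tensor([
        [0.80, 0.70, 0.20, 0.10],
        [0.30, 0.40, 0.60, 0.50],
        [0.55, 0.60, 0.35, 0.25],
    ])
    model = NodeScoringDQN(num_features=4)
    # Untrained weights give arbitrary scores; the best node maps to 100, the worst to 0.
    print(score_nodes(model, nodes))

In the system described by the abstract, scores of this kind would be produced inside the scoring phase of the Kubernetes scheduling framework, alongside the default plugins, so that the node with the highest overall score receives the new pod.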

Supervisors: Alessio Sacco, Guido Marchetto
Academic year: 2023/24
Publication type: Electronic
Number of pages: 88
Subjects:
Degree programme: Master's degree programme in Ingegneria Informatica (Computer Engineering)
Degree class: New system > Master's degree > LM-32 - INGEGNERIA INFORMATICA
Collaborating companies: Politecnico di Torino
URI: http://webthesis.biblio.polito.it/id/eprint/31868