Reinforcement Learning for Dynamic Stochastic Scheduling
Alessia De Crescenzo
Reinforcement Learning for Dynamic Stochastic Scheduling.
Rel. Paolo Brandimarte, Edoardo Fadda. Politecnico di Torino, Corso di laurea magistrale in Ingegneria Matematica, 2024
License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
With the sharp increase in uncertainty and complexity of production processes, dynamic scheduling nowadays plays a strong role in making enterprises more competitive: it is needed to handle real-time events such as machine breakdowns, job arrivals, and stochastic processing times. The static job shop scheduling problem (JSP) is one of the most practically relevant yet complex scheduling problems, having been proved NP-hard, and it has received a significant amount of attention in the operations research literature. The static approach, however, is unrealistic in real-world contexts, where dynamic events such as insertions, cancellations, or modifications of orders, machine breakdowns, and variations in due dates and processing times are inevitable; these events drive the realized execution of a static schedule far from its expected outcome and seriously deteriorate production efficiency.
This work focuses on the dynamic scheduling problem in job shops with new jobs arriving at stochastic times, aiming to minimize the penalties for earliness, tardiness, and flowtime according to the just-in-time (JIT) policy, which is based on the idea that early as well as late delivery must be discouraged. A Reinforcement Learning agent-based method for developing a predictive-reactive scheduling strategy is investigated.
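To make the JIT objective concrete, the sketch below computes the combined earliness/tardiness/flowtime penalty for a completed schedule. It is only an illustration of the general idea: the function name, the tuple layout, and the penalty weights are hypothetical and not taken from the thesis.

```python
# Hypothetical illustration of a JIT-style scheduling objective:
# each job has an arrival time r, a due date d, and a completion time C.
# Earliness  E = max(0, d - C)
# Tardiness  T = max(0, C - d)
# Flowtime   F = C - r
# The schedule cost is a weighted sum of the three over all jobs.

def jit_penalty(jobs, w_early=1.0, w_tardy=2.0, w_flow=0.5):
    """jobs: list of (arrival, due_date, completion) tuples; weights are illustrative."""
    total = 0.0
    for arrival, due, completion in jobs:
        earliness = max(0.0, due - completion)
        tardiness = max(0.0, completion - due)
        flowtime = completion - arrival
        total += w_early * earliness + w_tardy * tardiness + w_flow * flowtime
    return total

# Example: one job finishes 2 time units early, one finishes 3 units tardy.
jobs = [(0.0, 10.0, 8.0),   # earliness 2, flowtime 8
        (1.0, 12.0, 15.0)]  # tardiness 3, flowtime 14
print(jit_penalty(jobs))  # → 19.0
```

In a predictive-reactive setting such as the one studied here, a quantity of this form would typically serve as the (negated) reward signal guiding the Reinforcement Learning agent's dispatching decisions as new jobs arrive.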
