Politecnico di Torino

Model-Based Reinforcement Learning for Driver Action Prediction

Francesco Scorca

Model-Based Reinforcement Learning for Driver Action Prediction.

Rel. Fabrizio Lamberti. Politecnico di Torino, Corso di laurea magistrale in Data Science And Engineering, 2022

PDF (Tesi_di_laurea) - Thesis
License: Creative Commons Attribution Non-commercial No Derivatives.

Advanced Driver Assistance Systems (ADAS) can be significantly improved by effective driver action prediction: predicting driver actions early and accurately helps mitigate potentially unsafe driving behaviors, avoid accidents, and improve vehicle powertrain model predictive control applications. As for the interpretation of the term “action”, most efforts in the literature focus on the vehicle’s trajectory, while other works forecast the driver’s intention (e.g., going straight, turning left) or pedal pressure. The aim of this project is to develop a system that predicts the steering wheel angle and the accelerator and brake pedal pressures over a fixed time window, using a sensorless architecture: no sensors beyond those already present in the car are required, nor are any biometric readings necessary. The driver’s actions are forecast by an algorithm based on Artificial Intelligence that combines vehicle dynamics (e.g., lateral/longitudinal acceleration) with the perception of the road and the vehicle’s surroundings.

For this purpose we propose a methodology that approaches the time-series forecasting problem through a Model-Based Reinforcement Learning framework. First, we train an autonomous driving agent in a virtual simulation environment that reproduces the road conditions and driving dynamics of a motor vehicle. In the Model-Based paradigm, while the agent interacts with the environment, a model of the environment is learned through Supervised Learning, creating an imaginary copy of the world. The action prediction system lets the trained agent, starting from the current state of the driver and of the environment, move ahead in time within the learned environment model. The imagined action sequences are saved and constitute the output of the system: the driver action prediction.

The work conducted yields a two-fold result. First, we show the trade-offs between a model-based approach and a model-free one in an industrial driving simulator, evaluated on the sample and compute efficiency of the Reinforcement Learning framework. Second, we present a system capable of prediction over a time horizon, built from modules obtained through Model-Based Reinforcement Learning and evaluated both on metrics defined during the work and on metrics common in the literature.
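The imagination rollout described in the abstract — stepping a trained agent forward inside a learned environment model and collecting its actions as the prediction — can be sketched as follows. This is a minimal illustrative outline, not the thesis implementation: the policy, the dynamics model (here, arbitrary random linear maps), the state/action dimensions, and all function names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the trained components.
STATE_DIM, ACTION_DIM = 8, 3  # actions: steering angle, accelerator, brake
W_pi = rng.normal(scale=0.1, size=(ACTION_DIM, STATE_DIM))
W_dyn = rng.normal(scale=0.1, size=(STATE_DIM, STATE_DIM + ACTION_DIM))

def policy(state):
    """Trained driving agent: maps a state to (steering, accel, brake)."""
    return np.tanh(W_pi @ state)

def dynamics_model(state, action):
    """Learned environment model: predicts the next state (Supervised Learning)."""
    return np.tanh(W_dyn @ np.concatenate([state, action]))

def predict_driver_actions(state, horizon):
    """Roll the agent forward in the learned model ('imagination') and
    return the imagined action sequence as the driver action prediction."""
    actions = []
    for _ in range(horizon):
        action = policy(state)
        actions.append(action)
        state = dynamics_model(state, action)  # step in imagination, not reality
    return np.stack(actions)  # shape: (horizon, ACTION_DIM)

current_state = rng.normal(size=STATE_DIM)
predicted = predict_driver_actions(current_state, horizon=10)
print(predicted.shape)  # (10, 3)
```

The key design point is that, after training, prediction needs no further interaction with the real environment: the horizon-length rollout happens entirely inside the learned model.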

Advisor: Fabrizio Lamberti
Academic year: 2022/23
Publication type: Electronic
Number of Pages: 107
Degree programme: Master's degree in Data Science And Engineering
Degree class: New organization > Master of science > LM-32 - COMPUTER SYSTEMS ENGINEERING
Collaborating companies: SENSOR REPLY S.R.L. CON UNICO SOCIO
URI: http://webthesis.biblio.polito.it/id/eprint/25580