
Reinforcement Learning approach for autonomous UAVs path planning and exploration of critical environments

Riccardo Urban

Reinforcement Learning approach for autonomous UAVs path planning and exploration of critical environments. Supervisors: Giorgio Guglieri, Simone Godio. Politecnico di Torino, Master's degree programme in Mechatronic Engineering (Ingegneria Meccatronica), 2021.

PDF (Tesi_di_laurea), 22 MB. License: Creative Commons Attribution Non-commercial.
Abstract:

Unmanned Aircraft Systems (UASs) have become an important and promising field of study in the aerospace industry. Their versatility and efficiency have led to their use in a considerable number of different applications, and research in this field is constantly expanding their capabilities and, with them, the range of tasks they can perform. For instance, recent developments in autonomous navigation, often supported by artificial intelligence algorithms, have allowed them to operate independently of human intervention. This advancement has greatly improved the possibility of using autonomous UASs in critical environments where it would be difficult or dangerous for a human to intervene.

One of the most challenging problems for UASs is the collaborative operation of multiple Unmanned Aerial Vehicles (UAVs) in the same environment to perform a common set of tasks. The ability of a fleet of UAVs to collaborate on the same objective would greatly increase the capabilities of UASs: working together efficiently would speed up operations in many situations and would allow each UAV to be specialised for a specific task, opening up a whole new set of applications where autonomous UAV fleets could be employed. Different solutions are currently being proposed and studied to address this challenge, as will be illustrated below.

A new approach for the collaborative exploration of critical environments using a small fleet of UAVs is proposed. The goal is to design an algorithm able to guide an autonomous drone fleet in the exploration of an unknown environment. This kind of task presents several distinct challenges. Each drone must be able to move through space without hitting obstacles or other drones, and to do so efficiently, i.e. avoiding already explored areas and crossing paths, while continuing the exploration or any other task assigned to it. At the same time, the drones must communicate with each other to coordinate the exploration according to a common strategy and to share information that optimises the execution of the task. All these issues have to be solved and their solutions merged into a single, organic algorithm.

The proposed solution combines different methods in an innovative way that exploits the strong points of each. Some of these methods, such as the Artificial Potential Field, have been used and studied in engineering for many years; others, such as Deep Reinforcement Learning, are far more recent and their capabilities are still being explored and tested. Combining them increases the efficiency of the "classical" methods, enhancing their capabilities beyond those achieved so far.
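To make the role of the Artificial Potential Field concrete, the following is a minimal sketch, not the thesis implementation: a single UAV is attracted toward a target point (e.g. an unexplored frontier cell) and repelled by nearby obstacles. The gains k_att and k_rep, the influence radius d0, and the example coordinates are illustrative assumptions.

```python
# Minimal Artificial Potential Field sketch (illustrative only).
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=5.0, d0=2.0):
    """Return a unit step direction for one UAV on a 2D plane.

    pos, goal : (x, y) pairs; obstacles : list of (x, y) pairs.
    The attractive term pulls toward the goal; the repulsive term
    pushes away from obstacles closer than the influence radius d0.
    """
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)                      # attractive term
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:                             # inside influence radius
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    n = np.linalg.norm(force)
    return force / n if n > 1e-6 else np.zeros(2)

# Example: one UAV steered toward a frontier cell while skirting an obstacle.
print(apf_step(pos=(0, 0), goal=(5, 5), obstacles=[(1, 1)]))
```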
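The reinforcement-learning side can be illustrated, in a greatly simplified form, by a tabular Q-learning agent that learns to cover a small grid by receiving a bonus for visiting new cells. The grid size, reward values, hyperparameters, and single-agent setting are assumptions for this toy example; the thesis itself uses Deep Reinforcement Learning with a multi-UAV fleet.

```python
# Toy tabular Q-learning sketch of exploration with a novelty reward
# (stand-in for the Deep Reinforcement Learning component; illustrative only).
import numpy as np

rng = np.random.default_rng(0)
SIZE = 5
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # up, down, left, right
Q = np.zeros((SIZE, SIZE, len(ACTIONS)))

def step(pos, a):
    """Apply action a, clamping the agent inside the grid."""
    r = min(max(pos[0] + ACTIONS[a][0], 0), SIZE - 1)
    c = min(max(pos[1] + ACTIONS[a][1], 0), SIZE - 1)
    return (r, c)

alpha, gamma, eps = 0.1, 0.95, 0.2                    # assumed hyperparameters
for episode in range(500):
    pos, visited = (0, 0), {(0, 0)}
    for t in range(60):
        # epsilon-greedy action selection
        a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[pos]))
        nxt = step(pos, a)
        reward = 1.0 if nxt not in visited else -0.05  # novelty bonus vs. step cost
        visited.add(nxt)
        Q[pos][a] += alpha * (reward + gamma * Q[nxt].max() - Q[pos][a])
        pos = nxt

print("cells visited in the last training episode:", len(visited))
```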

Supervisors: Giorgio Guglieri, Simone Godio
Academic year: 2020/21
Publication type: Electronic
Number of pages: 105
Subjects:
Degree programme: Master's degree programme in Mechatronic Engineering (Ingegneria Meccatronica)
Degree class: New system > Master's degree > LM-25 - INGEGNERIA DELL'AUTOMAZIONE
Collaborating companies: Politecnico di Torino
URI: http://webthesis.biblio.polito.it/id/eprint/19286