Gianpietro Battocletti
Reinforcement Learning approach for cooperative UAVs exploration of critical environments.
Supervisors: Giorgio Guglieri, Simone Godio. Politecnico di Torino, Master's degree programme in Mechatronic Engineering (Ingegneria Meccatronica), 2021
PDF (Tesi_di_laurea) - Thesis. License: Creative Commons Attribution Non-commercial (24MB).
Abstract:
Unmanned Aircraft Systems (UASs) have become an important and promising field of study in the aerospace industry. Their versatility and efficiency have led to their use in a considerable number of applications, and ongoing research continues to extend their capabilities and, with them, the range of tasks they can perform. Only recently have developments in autonomous navigation, often supported by artificial intelligence algorithms, allowed UASs to operate independently of human intervention. This advancement has greatly expanded the possibility of deploying UASs in critical environments where it would be difficult or dangerous for a human to intervene. One of the most challenging problems in this field is the collaborative operation of multiple Unmanned Aerial Vehicles (UAVs) in the same environment to perform a common set of operations. A fleet of UAVs able to collaborate on the same task would greatly increase the capabilities of UASs: working together efficiently would speed up operations in many situations and would allow each UAV to be specialised for a specific role, opening up a whole new set of applications for autonomous UAV fleets. Several solutions are currently being proposed and studied to address this challenge. This thesis proposes a new approach to the collaborative exploration of critical environments using a small fleet of UAVs. The goal is to design an artificial-intelligence-based algorithm able to guide an autonomous drone fleet in the exploration of an unknown environment. This task presents several distinct challenges: each drone must move through space without hitting obstacles or other drones, while at the same time continuing the exploration task, or any other task assigned to it.

While performing these tasks, the drones must also communicate with each other to coordinate the exploration under a common strategy and to share information that optimises task execution. All of these issues must be solved and their solutions merged into a single coherent algorithm. The proposed solution combines different methods in an innovative way that exploits the strengths of each. Some of the methods used, such as the Artificial Potential Field, have been applied and widely studied in engineering for many years; others, such as Deep Reinforcement Learning, are far more recent, and their capabilities are still being explored and tested. Combining these methods enhances the classical approaches, extending their capabilities beyond those achieved so far.
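The Artificial Potential Field method mentioned in the abstract can be illustrated with a short sketch. This is the generic, textbook (Khatib-style) formulation of attractive/repulsive forces, not the thesis's actual implementation: the function name, gains (`k_att`, `k_rep`) and influence radius (`rho0`) are illustrative assumptions.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=2.0):
    """One Artificial Potential Field step: an attractive pull toward the
    goal plus a repulsive push away from obstacles inside the influence
    radius rho0. All gains here are illustrative, not from the thesis."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    # Attractive force: proportional to the vector from pos to goal.
    force = k_att * (goal - pos)
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        rho = np.linalg.norm(diff)      # distance drone -> obstacle
        if 0.0 < rho < rho0:            # only act within the influence radius
            # Classical repulsive term, pointing away from the obstacle.
            force += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (diff / rho)
    return force

# Example: drone at the origin, goal at (5, 5), one obstacle at (1, 1).
# The obstacle's repulsion reduces the net pull along the diagonal.
f = apf_step([0.0, 0.0], [5.0, 5.0], [[1.0, 1.0]])
```

In a multi-UAV setting such as the one described above, the other drones in the fleet would typically be treated as additional (moving) obstacles in the `obstacles` list, which is one reason the method pairs naturally with a higher-level learned exploration policy.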
| Field | Value |
|---|---|
| Supervisors | Giorgio Guglieri, Simone Godio |
| Academic year | 2020/21 |
| Publication type | Electronic |
| Number of pages | 107 |
| Degree programme | Corso di laurea magistrale in Mechatronic Engineering (Ingegneria Meccatronica) |
| Degree class | New regulations > Master's degree > LM-25 - INGEGNERIA DELL'AUTOMAZIONE |
| Collaborating institutions | Politecnico di Torino |
| URI | http://webthesis.biblio.polito.it/id/eprint/19281 |