Philippe Bich, Chiara Boretti
Dictionary of motion primitives for vision-based navigation using Optical Flow.
Advisors: Gianluca Setti, John Baillieul. Politecnico di Torino, Corso di laurea magistrale in Mechatronic Engineering (Ingegneria Meccatronica), 2021
PDF (Tesi_di_laurea): Thesis, 18MB. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract:
In the last twenty years Autonomous Vehicles (AVs) have turned into reality but, although the technology is becoming increasingly mature, AVs are still only able to reach relatively simple goals in structured environments, at the cost of large energy consumption. A new generation of more energy-efficient systems capable of pursuing complex goals in highly dynamic environments must be created. This is the goal of a MURI Project, sponsored by the U.S. Office of Naval Research (ONR) and carried out by Boston University, the Massachusetts Institute of Technology and several Australian universities, and it is the context in which this thesis came about, during the authors' visit to Boston University in the academic year 2020-21. The objective of the work is to develop a dictionary of motion primitives that exploit visual cues, extracted from sequences of images acquired by a monocular camera, in order to safely guide a mobile robot in unknown environments. From the computation of the optical flow field it is possible to retrieve the values of time-to-transit, a quantity believed to be computed in the visual cortex of animals, which is used in different steering control laws. Since its estimation is negatively affected by rotational motion, a Sense-Perceive-Act cycle is introduced to improve it. After a filtering operation, an estimate of the environment's geometry is obtained by analyzing the spatial distribution of time-to-transit values, and the appropriate control law is applied. The controller switches between two main motion primitives: the Tau Balancing control law and the Single Wall strategy. The former enables navigation in scenarios such as straight corridors and turns when the number of features is sufficiently high and uniformly distributed across the image; the latter is employed in situations characterized by feature sparsity. The entire algorithm has been implemented in the Robot Operating System (ROS) through nodes written in Python exploiting the OpenCV library. It has been tested in Gazebo on a ground vehicle, and the simulation results show the robot's ability to navigate safely in artificial environments (with fixed, a priori defined feature density) as well as in more realistic scenarios (with unknown feature density). To assess the performance of the algorithm on a real platform, it has been deployed on a Jackal robot equipped with a MYNT EYE S1030 camera. Several further experiments were run remotely, after sharing the code with the Boston University Robotics Lab, on the same UGV equipped with a Stereolabs Zed 2 camera. The results of this testing phase have been compared with those obtained through simulation, highlighting the effectiveness of the control system developed.
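The abstract compresses the perception-to-control pipeline into a few sentences. The sketch below illustrates, in Python with OpenCV (the tools the thesis names), how time-to-transit values might be estimated from sparse optical flow and combined into a tau-balancing steering command. It is a minimal illustration, not the thesis code: the function names, the tracking parameters, the gain `k`, the left/right split at the image centre, and the sign convention are all assumptions made for the example.

```python
import cv2
import numpy as np

def estimate_tau(prev_gray, gray, cx):
    """Track sparse features between two frames and estimate time-to-transit.

    For a camera translating along its optical axis, time-to-transit of a
    feature is approximately x / x_dot, where x is the horizontal image
    coordinate relative to the principal point and x_dot its image-plane
    velocity. Here x_dot is in pixels per frame, so tau is in frames;
    multiply by the frame period to obtain seconds.
    """
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.array([]), np.array([])
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    ok = status.flatten() == 1
    old = pts[ok].reshape(-1, 2)
    new = nxt[ok].reshape(-1, 2)
    x = new[:, 0] - cx                    # offset from the image centre
    x_dot = new[:, 0] - old[:, 0]         # horizontal flow, pixels/frame
    valid = np.abs(x_dot) > 1e-3          # discard near-zero flow
    return x[valid] / x_dot[valid], x[valid]

def tau_balancing(tau, x, k=0.5):
    """Tau-balancing steering sketch: balance the mean time-to-transit of
    the left and right image halves. Positive output is assumed to steer
    left (counter-clockwise); gain and sign convention are illustrative."""
    left, right = tau[x < 0], tau[x >= 0]
    if len(left) == 0 or len(right) == 0:
        # Feature sparsity on one side: the thesis switches to the
        # Single Wall primitive here; this sketch simply goes straight.
        return 0.0
    # If the left side will be "transited" sooner (smaller tau), the
    # command is negative and the robot steers away from it.
    return k * (np.mean(np.abs(left)) - np.mean(np.abs(right)))

if __name__ == "__main__":
    # Webcam demo standing in for the robot's monocular camera.
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cx = frame.shape[1] / 2.0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        tau, x = estimate_tau(prev, gray, cx)
        print("steering command:", tau_balancing(tau, x))
        prev = gray
```

In the thesis this logic lives inside ROS nodes, with the command published as an angular-velocity setpoint to the vehicle; the webcam loop above merely stands in for that plumbing.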
| Field | Value |
|---|---|
| Advisors | Gianluca Setti, John Baillieul |
| Academic year | 2020/21 |
| Publication type | Electronic |
| Number of pages | 124 |
| Subjects | |
| Degree programme | Corso di laurea magistrale in Mechatronic Engineering (Ingegneria Meccatronica) |
| Degree class | New regulations > Master of Science > LM-25 - INGEGNERIA DELL'AUTOMAZIONE |
| Collaborating institutions | Boston University |
| URI | http://webthesis.biblio.polito.it/id/eprint/17914 |