
Robot pose calculation based on Visual Odometry using Optical flow and Depth map

Sara Lucia Contreras Ojeda


Supervisors: Marcello Chiaberge, Chiara Boretti, Simone Angarano. Politecnico di Torino, Corso di laurea magistrale in Mechatronic Engineering (Ingegneria Meccatronica), 2022

License: Creative Commons Attribution Non-commercial No Derivatives.

Abstract:

Visual Odometry (VO) is a technique for accurately estimating the position of a robot over time, useful, for instance, for motion tracking, obstacle detection and avoidance, and autonomous navigation. It relies on images captured by a monocular or stereo camera mounted on the robot. From these images, features must be extracted to determine how the camera is moving; this can be done in three different ways: feature matching, feature tracking, or computing the Optical Flow. Once the key feature points are found, a 3D-to-3D, 3D-to-2D, or 2D-to-2D motion estimation can be performed. Over the years many implementations of visual odometry have been proposed; a common limitation is that they must be specifically fine-tuned to work in different environments and require prior knowledge of the space to recover the full trajectory travelled by the camera. To obtain a more general implementation, able to adapt to distinct environments, and to improve the accuracy of the pose estimation, deep learning techniques have recently been introduced to overcome these limitations. Convolutional Neural Networks (CNNs) have proven to give good results in artificial vision tasks; however, VO as a whole has not been solved with this technique. On the other hand, CNNs have solved tasks such as feature detection and Optical Flow estimation with good results, and including them in some VO pipelines has improved the overall accuracy. For this reason, in this work CNNs were used for the estimation of the Optical Flow.

This work presents an approach to the Visual Odometry problem that uses Deep Learning in one of its stages as a tool to compute the trajectory of a stereo camera in an indoor environment. To achieve this goal, Convolutional Neural Networks such as RAFT and FlowNet were used to compute the optical flow between two consecutive frames, and the depth map was computed from the right and left camera images of each frame using an OAK-D camera. The aim of this procedure was to extract key feature points from the images over time. The key points of the left image in the first frame were found with a keypoint feature extractor, in this case the FAST corner detection algorithm. The optical flow was then used to locate the same feature points of the previous left image in the left image of the following frame. Next, the disparity was obtained from the depth map and used to locate the same key feature points in the right images of the two frames. The key feature points were triangulated to obtain 3D points, from which the transformation matrix encoding the pose of the camera over the measurement period was computed.

The proposed method was implemented on a prototype robot under development at the Service Robotics Center of the Politecnico di Torino (PIC4SER), whose task will be to measure CO2 levels in indoor environments, with the aim of creating an autonomous system capable of purifying the air of these spaces when needed. The camera was mounted on the robot and indoor courses were run with this system; the motion of the robot was controlled by a person with a joystick and the odometry was recorded using ROS2. The Visual Odometry estimated with the methodology proposed in this work was compared with the odometry of the robot, obtained with ROS2, and plotted using MATLAB.
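The abstract describes a concrete pipeline (FAST corners, CNN-based optical flow, disparity from the stereo depth map, triangulation, 3D-to-3D motion estimation). The following Python sketch illustrates one iteration of such a pipeline under stated assumptions: the dense optical flow (e.g. from RAFT) and the disparity maps are assumed to be already computed and passed in as NumPy arrays; back-projection from disparity is used in place of explicit two-view triangulation (equivalent for a rectified stereo pair); the rigid transform is estimated with a Kabsch/SVD alignment, one common way to do 3D-to-3D motion estimation. All function and parameter names (track_pose_step, baseline, the 10 m depth cut-off) are illustrative, not the thesis code.

import numpy as np
import cv2

def track_pose_step(left_prev, flow, disp_prev, disp_curr, K, baseline, T_world):
    """Estimate the camera pose change between two stereo frames.

    left_prev            : previous left image (grayscale, uint8)
    flow                 : dense optical flow left_prev -> left_curr, shape (H, W, 2)
    disp_prev, disp_curr : disparity maps of the two frames, shape (H, W)
    K                    : 3x3 intrinsic matrix of the left camera
    baseline             : stereo baseline in metres
    T_world              : current 4x4 camera-to-world pose, updated and returned
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]

    # 1) FAST corners on the previous left image
    fast = cv2.FastFeatureDetector_create(threshold=25)
    kps = np.array([kp.pt for kp in fast.detect(left_prev)], dtype=np.float32)

    # 2) Track the corners into the current left image with the dense flow
    u, v = kps[:, 0].astype(int), kps[:, 1].astype(int)
    kps_curr = kps + flow[v, u]

    # 3) Back-project both corner sets to 3D using disparity (depth = fx * b / d)
    def to_3d(pts, disp):
        uu = np.clip(pts[:, 0].astype(int), 0, disp.shape[1] - 1)
        vv = np.clip(pts[:, 1].astype(int), 0, disp.shape[0] - 1)
        d = disp[vv, uu]
        z = fx * baseline / np.maximum(d, 1e-6)
        x = (pts[:, 0] - cx) * z / fx
        y = (pts[:, 1] - cy) * z / fy
        return np.stack([x, y, z], axis=1)

    pts3d_prev = to_3d(kps, disp_prev)
    pts3d_curr = to_3d(kps_curr, disp_curr)

    # Keep only points with a plausible indoor depth in both frames
    # (10 m is an assumed cut-off, not a value from the thesis)
    ok = (pts3d_prev[:, 2] > 0) & (pts3d_curr[:, 2] > 0) & \
         (pts3d_prev[:, 2] < 10) & (pts3d_curr[:, 2] < 10)
    pts3d_prev, pts3d_curr = pts3d_prev[ok], pts3d_curr[ok]

    # 4) 3D-to-3D motion estimation: rigid transform via the Kabsch algorithm,
    #    mapping current-frame points into the previous camera frame
    c_prev, c_curr = pts3d_prev.mean(0), pts3d_curr.mean(0)
    H = (pts3d_curr - c_curr).T @ (pts3d_prev - c_prev)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_prev - R @ c_curr

    # 5) Accumulate the relative motion into the global pose
    T_rel = np.eye(4)
    T_rel[:3, :3], T_rel[:3, 3] = R, t
    return T_world @ T_rel

In an actual run, a loop over the recorded frames would call this step with the flow predicted by the network for each consecutive pair, starting from T_world = identity, and the translation column of T_world would give the trajectory to compare against the ROS2 odometry.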

Supervisors: Marcello Chiaberge, Chiara Boretti, Simone Angarano
Academic year: 2022/23
Publication type: Electronic
Number of pages: 96
Subjects:
Degree course: Corso di laurea magistrale in Mechatronic Engineering (Ingegneria Meccatronica)
Degree class: Nuovo ordinamento > Laurea magistrale > LM-25 - INGEGNERIA DELL'AUTOMAZIONE
Collaborating companies: Politecnico di Torino - PIC4SER
URI: http://webthesis.biblio.polito.it/id/eprint/25439