
IMPLEMENTATION OF AN IMAGE-BASED VISUAL SERVOING SYSTEM ON A PARROT BEBOP 2 UAV

Ilyas M'Gharfaoui

Advisor: Marcello Chiaberge. Politecnico di Torino, Master's degree programme in Mechatronic Engineering (Ingegneria Meccatronica), 2019

PDF (Tesi_di_laurea), 21MB. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract:

The main objective of this thesis is to design and implement two systems on a Parrot Bebop 2 UAV.

The first is a Human-UAV Interaction (HUI) system for taking off, flying, and landing the UAV: the camera of the ground station machine detects and identifies the operator's gestures, and a different command is sent to the drone for each gesture. A dataset containing different hand postures has been used to train the detection framework to detect the operator's face and hands. An algorithm then interprets the gestures obtained from the detection results, with each interpreted gesture corresponding to a flight command.

The second is an Image-Based Visual Servoing (IBVS) control system that commands the UAV to track and follow a detected object, in this case a person, using the drone's monocular camera. This requires an algorithm that uses the detected object's geometry and location in the image plane to command the UAV so that the target stays at a fixed distance and approximately in the centre of the drone's field of view (FoV). To this end, a PID controller computes the velocities (horizontal, lateral, vertical, and angular) to send to the drone. A dataset containing different pedestrians has been used to train the detection framework. To support it, a tracking framework has been implemented that identifies and assigns a unique ID to each detected person, so the drone can keep following the same person even when several people are detected in the image plane.

The system components (deep neural network detector, tracking framework, HUI, and IBVS) are built as nodes in the ROS environment. Both systems have been verified to work off-board, with a ground station machine controlling the Parrot Bebop 2 drone.

Chapter 1 describes the Robot Operating System and the packages used in this project. Chapter 2 introduces object detection and deep learning, with details on convolutional neural networks and the YOLO object detection system. Chapter 3 describes the steps taken to implement the Human-UAV Interaction system and explains the reasoning behind the algorithm that interprets gestures into commands. Chapter 4 introduces object tracking and describes the SORT and Deep SORT trackers. Chapter 5 describes the steps taken to implement the IBVS system and explains the design of the PID controller and of the object follower. Chapter 6 discusses and analyses the results obtained from both the HUI and IBVS systems.
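The thesis code itself is not reproduced on this page. As an illustration only, a minimal Python sketch of the HUI gesture-to-command step described above might look as follows: the topic names (bebop/takeoff, bebop/land, bebop/cmd_vel) follow the public bebop_autonomy ROS driver, while the gesture labels, velocity values, and the interpret_gesture placeholder are hypothetical stand-ins for the detector's output, not the thesis's actual mapping.

#!/usr/bin/env python
# Hypothetical sketch: map interpreted operator gestures to Bebop 2 commands.
# Topic names follow the bebop_autonomy ROS driver; gesture labels are invented.
import rospy
from std_msgs.msg import Empty
from geometry_msgs.msg import Twist

GESTURE_TO_VELOCITY = {
    # gesture label -> (linear x, linear y, linear z, angular z)
    'hand_up':    (0.0, 0.0,  0.2, 0.0),   # climb
    'hand_down':  (0.0, 0.0, -0.2, 0.0),   # descend
    'hand_left':  (0.0, 0.2,  0.0, 0.0),   # strafe left
    'hand_right': (0.0, -0.2, 0.0, 0.0),   # strafe right
}

def interpret_gesture():
    """Placeholder for the face/hand detector plus gesture interpreter.

    In the thesis this is the output of the trained detection framework;
    here it returns no gesture so the sketch stays self-contained.
    """
    return None

def main():
    rospy.init_node('hui_commander')
    takeoff = rospy.Publisher('bebop/takeoff', Empty, queue_size=1)
    land = rospy.Publisher('bebop/land', Empty, queue_size=1)
    cmd_vel = rospy.Publisher('bebop/cmd_vel', Twist, queue_size=1)
    rate = rospy.Rate(10)  # send commands at 10 Hz
    while not rospy.is_shutdown():
        gesture = interpret_gesture()
        if gesture == 'both_hands_up':
            takeoff.publish(Empty())
        elif gesture == 'both_hands_down':
            land.publish(Empty())
        elif gesture in GESTURE_TO_VELOCITY:
            vx, vy, vz, wz = GESTURE_TO_VELOCITY[gesture]
            msg = Twist()
            msg.linear.x, msg.linear.y, msg.linear.z = vx, vy, vz
            msg.angular.z = wz
            cmd_vel.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    main()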
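The IBVS step, converting the detected person's bounding box into velocity commands through a PID controller, can likewise be sketched. This is a minimal illustration under stated assumptions, not the thesis's controller: the gains, the target box-height ratio, the 856x480 frame size, and the sign conventions are example values, and the lateral axis (which the thesis also controls) is omitted for brevity.

# Hypothetical sketch of the IBVS error computation and PID loop.
# Gains and the target box-height ratio are illustrative, not the thesis's values.

class PID:
    """One PID controller per velocity axis (forward, vertical, yaw)."""
    def __init__(self, kp, ki=0.0, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def image_errors(box, img_w, img_h, target_height_ratio=0.5):
    """Turn an (x, y, w, h) pixel bounding box into normalized servoing errors.

    - forward error: desired vs. observed box height, a monocular proxy
      for the distance to the tracked person
    - vertical/yaw errors: offset of the box centre from the image centre
    """
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0
    e_forward = target_height_ratio - h / float(img_h)
    e_yaw = (img_w / 2.0 - cx) / (img_w / 2.0)       # in [-1, 1]
    e_vertical = (img_h / 2.0 - cy) / (img_h / 2.0)  # in [-1, 1]
    return e_forward, e_vertical, e_yaw


if __name__ == '__main__':
    pid_fwd, pid_vert, pid_yaw = PID(0.8), PID(0.5), PID(1.0)
    # One control step for a person detected at box (500, 150, 120, 300)
    # in an 856x480 frame, with a 0.1 s loop period.
    e_f, e_v, e_y = image_errors((500, 150, 120, 300), 856, 480)
    vx = pid_fwd.update(e_f, 0.1)   # forward velocity
    vz = pid_vert.update(e_v, 0.1)  # vertical velocity
    wz = pid_yaw.update(e_y, 0.1)   # yaw rate
    print(vx, vz, wz)

In a full off-board setup these velocities would be packed into a geometry_msgs/Twist message and published on the driver's command topic, as in the HUI sketch above.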

Advisor: Marcello Chiaberge
Academic year: 2019/20
Publication type: Electronic
Number of pages: 68
Subjects:
Degree programme: Master's degree programme in Mechatronic Engineering (Ingegneria Meccatronica)
Degree class: New regulations > Master's degree > LM-25 - Automation Engineering
Joint supervision institution: Beihang University (CHINA)
Collaborating companies: NOT SPECIFIED
URI: http://webthesis.biblio.polito.it/id/eprint/12488