Vision and inertial data fusion for collaborative robotics

Anna Grosso

Supervisors: Marcello Chiaberge, Sarah Cosentino. Politecnico di Torino, Master's degree programme in Mechatronic Engineering (Ingegneria Meccatronica), 2020.

PDF (Tesi_di_laurea), Thesis, 30 MB. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract:

Robots capable of engaging in collaborative behaviours with humans, widely known as cobots, are characterized by highly complex requirements and represent one of today's major challenges in robotics. To meet the strict accuracy requirements needed to ensure human safety, and to gather context information useful for intelligent human-robot collaboration, these robots must accurately localize the human operators who move freely within the robotic workspace. In today's industrial environments, this objective can be achieved with sophisticated sensory devices such as laser, ultrasonic, or vision systems. However, human tracking can be particularly difficult in the presence of occlusions, which can severely degrade vision-based or light-based approaches, and in unconstrained conditions such as crowded spaces. This thesis analyzes the integration of inertial measurement units (IMUs) and a vision system to improve human localization for collaborative robotics. More specifically, this work first shows how the human upper body can be independently reconstructed by means of an inertial motion capture system and of a stereoscopic vision system. To take advantage of both types of sensors, the measurements of the two systems are then combined using a two-step Kalman filter fusion algorithm. The approach is first validated with simple calibration movements; more complex movements are then considered to verify the effectiveness of the framework. In particular, two categories of movements are tested experimentally: i) short movements, in which the subject returns to a rest condition every few seconds, and ii) long movements, in which the subject performs a long motion task without returning to the rest position until the end. Experimental results show that fusing camera measurements with the IMU data compensates for the typical drift of the IMU sensors and effectively improves the spatial perception of the robot. This result could be of great interest not only for direct human-robot interaction tasks, but also for the characterization of advanced robotic cells, where human behaviour can be gradually learned and the IMU sensors can eventually be disregarded in favour of a purely three-dimensional vision reconstruction.
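The abstract's core technical step is the two-step (predict/correct) Kalman filter that fuses inertial and stereo-vision estimates. As a rough illustration of that idea only, the Python sketch below fuses a single 1-D joint coordinate: IMU acceleration drives the prediction step and a stereo-camera position fix drives the correction step. The state model, noise levels, and sampling rates are illustrative assumptions, not the filter actually derived in the thesis.

```python
import numpy as np

# Hedged sketch: 1-D position/velocity Kalman filter fusing an IMU
# (prediction) with stereo-camera position fixes (correction).
# All matrices and noise levels are illustrative assumptions.

dt = 0.01                                  # IMU sample period [s] (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity state transition
B = np.array([[0.5 * dt**2], [dt]])        # acceleration input model
H = np.array([[1.0, 0.0]])                 # camera observes position only
Q = 1e-4 * np.eye(2)                       # process noise (IMU noise/drift)
R = np.array([[1e-2]])                     # camera measurement noise

x = np.zeros((2, 1))                       # state [position; velocity]
P = np.eye(2)                              # state covariance

def predict(a_imu):
    """Step 1: propagate the state with an IMU acceleration sample."""
    global x, P
    x = F @ x + B * a_imu
    P = F @ P @ F.T + Q

def correct(z_cam):
    """Step 2: correct the prediction with a stereo-vision position fix."""
    global x, P
    y = z_cam - H @ x                      # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

# Toy run: constant true acceleration, camera fix every 10th IMU sample.
rng = np.random.default_rng(0)
a_true = 0.1                               # true acceleration [m/s^2]
for k in range(1, 101):
    predict(a_imu=a_true + rng.normal(0.0, 0.05))
    if k % 10 == 0:
        true_pos = 0.5 * a_true * (k * dt) ** 2
        correct(z_cam=np.array([[true_pos + rng.normal(0.0, 0.1)]]))
print(f"fused position estimate: {x[0, 0]:.4f} m")
```

Because the camera measures absolute position, each correction step bounds the error accumulated by integrating noisy IMU accelerations, which is the drift-compensation effect reported in the abstract.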

Supervisors: Marcello Chiaberge, Sarah Cosentino
Academic year: 2019/20
Publication type: Electronic
Number of pages: 82
Subjects:
Degree programme: Master's degree programme in Mechatronic Engineering (Ingegneria Meccatronica)
Degree class: New regulations > Master of Science > LM-25 - Automation Engineering
Joint supervision institution: Waseda University (JAPAN)
Collaborating companies: NOT SPECIFIED
URI: http://webthesis.biblio.polito.it/id/eprint/14025