
Smart mobile manipulation for human-robot assistive applications

Cesare Luigi Blengini

Supervisors: Marina Indri, Pangcheng David Cen Cheng. Politecnico di Torino, Master's degree program in Ingegneria Informatica (Computer Engineering), 2024.

PDF (Tesi_di_laurea) - Thesis
License: Creative Commons Attribution Non-commercial No Derivatives.

Abstract:

Within the field of Robotics, human-robot interaction and cooperation are fascinating topics with important real-world applications. The objective is to build robots capable not only of safely sharing the environment with a human, but also of actively interacting with the person to jointly accomplish tasks. The main challenges are meeting safety requirements and making the interaction feel as natural as possible, so that the robot can easily integrate into the human workflow. Recent advancements in machine learning have made it easier and more convenient to develop such solutions, providing powerful computer vision algorithms that require only camera sensors to operate. The present work addresses these issues by experimenting with a mobile manipulator, the LoCoBot wx250s. The robot is equipped with a mobile base and a 6-DOF arm, and it can monitor its surroundings with a LiDAR and a color and depth camera. Its task is to navigate an environment, such as a laboratory or a warehouse, pick up a requested object, and safely hand it to a human. The robot is controlled through the ROS framework, the de facto standard for robotic applications. Navigation and arm control are managed through standard ROS packages, while custom solutions leveraging a mix of classical and machine learning algorithms were developed for object recognition, grasp point prediction, and object handover. The robot must be able to detect the target object, grasp it in a way that is functional to its subsequent handover, and then perform the handover itself, ensuring that the human is presented with the safe part of the object. The solution for grasp point prediction is based on the concept of part affordance, where each part of the object is labeled with its apparent purpose; in this application, the focus is on detecting which parts of the object are dangerous or safe for the human to grasp.
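The affordance-based grasp selection described above can be sketched in a few lines. This is an illustrative example, not the thesis's actual code: the label set (GRASP/CUT) and the policy of grasping the dangerous part so the safe part is left for the human are assumptions for the sake of the sketch.

```python
import numpy as np

# Hypothetical affordance label indices; the thesis's actual label set
# (and its safe/dangerous split) may differ.
BACKGROUND, GRASP, CUT = 0, 1, 2   # e.g. a knife: handle = GRASP, blade = CUT

def handover_grasp_point(affordance_mask):
    """Pick a pixel for the robot to grasp before a handover.

    One simple policy: grasp the centroid of the dangerous region (CUT),
    leaving the safe region (GRASP) free for the human; if no dangerous
    part is detected, fall back to the safe region itself.
    """
    for label in (CUT, GRASP):
        rows, cols = np.nonzero(affordance_mask == label)
        if rows.size:
            return int(rows.mean()), int(cols.mean())
    return None  # no graspable part detected

# Toy 1x6 "object": three handle pixels followed by two blade pixels.
mask = np.array([[0, 1, 1, 1, 2, 2]])
print(handover_grasp_point(mask))  # -> (0, 4): centroid of the blade region
```

In practice the mask would come from the affordance segmentation network rather than being hand-written, and the centroid would be refined into a full grasp pose for the parallel gripper.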
Since it relies on visual data, affordance detection is tightly linked with object detection, so both independent and joint solutions were tested. The proposed solutions leverage popular algorithms such as YOLO and DeepLab, among others. The grasping itself is performed as a simple parallel grasp, but more complex solutions were tested as well. The handover solution combines the MediaPipe framework for hand tracking with the depth camera stream to map the human hand in 3D space; the robot is then able to extend its arm towards the human hand and release the object once the handover is completed. The present thesis discusses in detail the implementation of the algorithms, the challenges encountered, the possible alternative solutions, and the future developments, and it showcases the results on the real robot.
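The step of mapping the tracked hand into 3D space reduces to pinhole deprojection: a hand-landmark pixel from the tracker, combined with the depth reading at that pixel, yields a camera-frame point. A minimal sketch follows; the intrinsics and the landmark coordinates are illustrative (real values would come from the camera's calibration data), and the 640x480 resolution is an assumption.

```python
# Pinhole deprojection of a tracked hand pixel into camera-frame 3D.
# Intrinsics (fx, fy, cx, cy) below are illustrative placeholders; in a
# real setup they come from the depth camera's calibration.

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Map a pixel (u, v) with depth (meters) to camera-frame (X, Y, Z)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Hand trackers such as MediaPipe return normalized landmark coordinates
# in [0, 1]; scale them to pixel coordinates first.
w, h = 640, 480
lm_x, lm_y = 0.5, 0.5                 # hypothetical wrist landmark, image center
u, v = lm_x * w, lm_y * h
point = deproject(u, v, depth_m=0.8, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(point)  # -> (0.0, 0.0, 0.8): the hand lies on the optical axis
```

The resulting camera-frame point would then be transformed into the robot's base frame so the arm controller can plan a reaching motion toward the hand.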

Supervisors: Marina Indri, Pangcheng David Cen Cheng
Academic year: 2023/24
Publication type: Electronic
Number of pages: 74
Subjects:
Degree program: Master's degree program in Ingegneria Informatica (Computer Engineering)
Degree class: New regulations > Master's degree > LM-32 - INGEGNERIA INFORMATICA
Collaborating companies: NOT SPECIFIED
URI: http://webthesis.biblio.polito.it/id/eprint/30926