Carlo Migliaccio
Skill learning and task composition from human demonstrations for a collaborative manipulator.
Supervisors: Marina Indri, Pangcheng David Cen Cheng. Politecnico di Torino, Master's degree programme in Ingegneria Informatica (Computer Engineering), 2025
PDF (Tesi_di_laurea): thesis, 11 MB. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
Human-robot collaboration (HRC) in modern industry requires manipulators that can acquire and reuse skills easily and without domain-specific knowledge. Learning from Demonstration (LfD) offers a practical way to achieve this; however, real deployments still face challenges, such as turning raw demonstrations into reliable low-level controllers on hardware and re-parameterizing skills to new object/goal poses. This thesis presents a unified LfD pipeline for the UFACTORY xArm6, a 6-DOF collaborative robot (cobot), that learns reusable motor skills from human kinesthetic demonstrations; the learned skills can then be composed to plan more complex manipulation tasks. The pipeline follows the canonical stages -- demonstration data acquisition, motion encoding, execution, and refinement -- and implements three learning methods for low-level skills: Behavioral Cloning (BC), Dynamic Movement Primitives (DMP), and Gaussian Mixture Models with Gaussian Mixture Regression (GMM-GMR).
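As an illustration of one of the methods named above, a one-degree-of-freedom discrete DMP can be sketched as follows. This is a minimal, self-contained sketch of the standard Ijspeert-style formulation, not code from the thesis; all class and parameter names are hypothetical.

```python
import numpy as np

class DMP1D:
    """Minimal discrete Dynamic Movement Primitive for one degree of freedom.

    Dynamics (Ijspeert-style):
        tau^2 * x_dd = alpha * (beta * (g - x) - tau * x_d) + f(s)
        tau   * s_d  = -alpha_s * s          (decaying phase variable)
    """

    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_s=4.0):
        self.alpha, self.beta, self.alpha_s = alpha, beta, alpha_s
        # Gaussian basis centres spaced along the decaying phase variable s
        self.c = np.exp(-alpha_s * np.linspace(0, 1, n_basis))
        d = np.diff(self.c)
        self.h = 1.0 / np.append(d, d[-1]) ** 2   # widths from centre spacing

    def fit(self, x_demo, dt):
        """Learn forcing-term weights from a single demonstration."""
        T = len(x_demo)
        self.x0, self.g, self.tau = x_demo[0], x_demo[-1], (T - 1) * dt
        xd = np.gradient(x_demo, dt)
        xdd = np.gradient(xd, dt)
        s = np.exp(-self.alpha_s * np.arange(T) * dt / self.tau)
        # Forcing term that would reproduce the demonstration exactly
        f_target = (self.tau**2 * xdd
                    - self.alpha * (self.beta * (self.g - x_demo) - self.tau * xd))
        scale = self.g - self.x0 if abs(self.g - self.x0) > 1e-6 else 1.0
        xi = s * scale
        psi = np.exp(-self.h * (s[:, None] - self.c) ** 2)    # (T, n_basis)
        # One weighted least-squares fit per basis function
        self.w = ((psi * (xi * f_target)[:, None]).sum(0)
                  / ((psi * (xi**2)[:, None]).sum(0) + 1e-10))
        return self

    def rollout(self, g=None, x0=None, dt=0.01):
        """Integrate the DMP; passing a new goal g re-parameterizes the skill."""
        g = self.g if g is None else g
        x = self.x0 if x0 is None else x0
        scale = g - x
        v, s, traj = 0.0, 1.0, [x]
        for _ in range(int(round(self.tau / dt))):
            psi = np.exp(-self.h * (s - self.c) ** 2)
            f = scale * s * (psi @ self.w) / (psi.sum() + 1e-10)
            v += dt * (self.alpha * (self.beta * (g - x) - self.tau * v) + f) / self.tau**2
            x += dt * v
            s += dt * (-self.alpha_s * s / self.tau)
            traj.append(x)
        return np.array(traj)

# Demonstration: a minimum-jerk reach from 0 to 1 in one second
dt = 0.01
t = np.linspace(0.0, 1.0, 101)
x_demo = 10 * t**3 - 15 * t**4 + 6 * t**5

dmp = DMP1D().fit(x_demo, dt)
traj_same = dmp.rollout(dt=dt)        # reproduces the demonstration
traj_new = dmp.rollout(g=2.0, dt=dt)  # same skill, re-parameterized goal
```

The last two lines show the property the abstract relies on: once the forcing term is learned, changing `g` adapts the motion to a new goal without retraining.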
Task-dependent parameters are retrieved by a vision subsystem based on an Intel RealSense D435 RGB-D camera, enabling skill adaptation to unseen situations without retraining.
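A minimal sketch of how a detected object pose might re-parameterize a taught grasp goal: the camera extrinsics and the object pose from the RGB-D detection are chained into a goal pose in the robot base frame. All transform names and numeric values here are hypothetical, and identity rotations are used only for readability.

```python
import numpy as np

def pose_to_T(R, t):
    """Pack a rotation matrix R (3x3) and translation t (3,) into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Camera pose in the robot base frame (e.g. from hand-eye calibration)
T_base_cam = pose_to_T(np.eye(3), [0.5, 0.0, 0.4])
# Object pose detected in the RGB-D camera frame
T_cam_obj = pose_to_T(np.eye(3), [0.0, 0.1, 0.6])
# Grasp pose taught relative to the object during the demonstration
T_obj_grasp = pose_to_T(np.eye(3), [0.0, 0.0, 0.1])

# Chain the transforms: the result is the new goal pose for the learned skill
T_base_grasp = T_base_cam @ T_cam_obj @ T_obj_grasp
goal_position = T_base_grasp[:3, 3]   # -> [0.5, 0.1, 1.1]
```

The resulting `goal_position` is exactly the kind of task-dependent parameter that can be fed to a learned skill (e.g. as a new goal) each time the object appears in a different place.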
