Luca Ianniello
Inverse Reinforcement Learning for Mastering Long-Horizon Procedural Tasks from Visual Demonstrations.
Supervisors: Giuseppe Bruno Averta, Andrea Protopapa, Francesca Pistilli. Politecnico di Torino, Master's degree programme in Computer Engineering (Ingegneria Informatica), 2025
License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
Robotic manipulation represents one of the most challenging domains in robotics, requiring precise coordination and adaptability to complex environments. While reinforcement learning approaches show promise, they face significant limitations in practical applications: reward engineering is prohibitively complex, exploration in high-dimensional spaces is inefficient, and physical robot training requires extensive resources. Imitation Learning (IL), particularly Inverse Reinforcement Learning (IRL), offers an alternative by learning directly from demonstrations rather than explicit reward signals. However, current IRL approaches face several fundamental challenges when applied to robotic manipulation tasks. Long-horizon manipulation tasks with multiple sequential stages are difficult to learn end-to-end due to sparse rewards and temporal complexity.
Additionally, the effectiveness of different visual representation learning architectures for IRL in manipulation contexts remains under-explored, especially when combined with procedural decomposition strategies.
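To make the core idea concrete, the following is a minimal illustrative sketch of learning a reward from demonstrations rather than engineering one by hand. It is not the thesis's method: it uses a GAIL-style logistic discriminator on hypothetical 2-D state features (the clusters, learning rate, and feature dimensions are all invented for illustration). The discriminator is trained to separate expert states from policy states, and its output is then reused as a reward signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D state features: expert demonstrations cluster near
# (1, 1); an untrained policy's visited states cluster near (0, 0).
expert = rng.normal(loc=1.0, scale=0.3, size=(200, 2))
policy = rng.normal(loc=0.0, scale=0.3, size=(200, 2))

X = np.vstack([expert, policy])
y = np.concatenate([np.ones(200), np.zeros(200)])  # label 1 = expert

# Train a logistic discriminator D(s) by plain gradient ascent on the
# log-likelihood (no ML library, to keep the sketch self-contained).
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # D(s) for every state
    w += lr * (X.T @ (y - p)) / len(y)
    b += lr * np.mean(y - p)

def learned_reward(state):
    """Learned reward log D(s): higher where states look expert-like."""
    d = 1.0 / (1.0 + np.exp(-(state @ w + b)))
    return float(np.log(d + 1e-8))

# The learned signal replaces a hand-engineered reward: expert-like
# states receive higher reward than policy-like ones.
print(learned_reward(np.array([1.0, 1.0])) > learned_reward(np.array([0.0, 0.0])))
```

In a full IRL loop this learned reward would then drive policy optimization, and the discriminator would be retrained as the policy improves; the sketch shows only the reward-learning step.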
