Segmenting Dynamic Objects in 3D from Egocentric Videos
Francesco Borgna. Segmenting Dynamic Objects in 3D from Egocentric Videos.
Supervisors: Tatiana Tommasi, Chiara Plizzari. Politecnico di Torino, Master's degree in Mathematical Engineering, 2024.
PDF (Tesi_di_laurea): Thesis, 121MB. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
With the increasing availability of egocentric wearable devices, there has been a surge in first-person videos, leading to numerous studies aiming to leverage this data. Among these efforts, 3D scene reconstruction stands out as a key area of interest. This process allows for the recreation of the scene where the video was captured, providing invaluable support for the growing field of augmented reality applications. Some egocentric datasets include static 3D scans of recording locations, which usually require costly hardware or dedicated scanning sessions. An alternative approach reconstructs the scene directly from video frames using Structure from Motion (SfM) techniques. This method not only captures the motion of the actor and of the objects they interact with, including object transformations (e.g., slicing a carrot), but also enables the use of any egocentric footage for scene reconstruction, even without physical access to the environment.
However, the task of decomposing dynamic scenes into objects has received limited attention.
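The SfM reconstruction mentioned in the abstract ultimately rests on triangulating matched 2D observations from different frames into 3D points, given the estimated camera projection matrices. A minimal sketch of that core step, using the standard linear (DLT) triangulation on toy camera values (the matrices and 3D point below are illustrative, not from the thesis):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation: each 2D observation (u, v) under a 3x4
    projection P yields two homogeneous linear constraints on X."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A X = 0 via SVD; the point is the null-space direction.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two toy cameras: identity pose, and a 1-unit baseline along x.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(np.round(X_est, 3))  # recovers approximately [0.2, -0.1, 4.0]
```

In a full SfM pipeline (e.g., COLMAP-style incremental reconstruction) the projection matrices themselves are estimated from feature matches; dynamic objects violate the static-scene assumption underlying that estimation, which is precisely what motivates segmenting them.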