Deep Domain Adaptation through Inter-modal Self-supervision
Luca Robbiano
Supervisors: Barbara Caputo, Mirco Planamente, Mohammadreza Loghmani. Politecnico di Torino, Master of Science programme in Computer Engineering, 2020
Licence: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
Computer vision in robotics makes heavy use of RGB-D data. However, collecting large manually annotated datasets is extremely time-consuming and therefore costly. A potential solution is to generate synthetic datasets automatically and use them to make predictions on real data. Nevertheless, the domain shift between the synthetic dataset (source domain) and the real data (target domain) partially undermines the effectiveness of this solution, yielding an accuracy significantly lower than what would be obtained with labelled real data. To overcome this issue, multiple domain adaptation methods have been developed. These methods can also be employed in a multimodal scenario such as RGB-D, but none of them exploits the existing relationship between the modalities.
We propose a novel domain adaptation method that reduces the domain shift by forcing the convolutional neural network to learn the connection between RGB and depth images through a secondary self-supervised task.
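The training objective implied by the abstract can be sketched as a multi-task loss: a supervised classification loss on the labelled source (synthetic) data, plus a self-supervised pretext loss computed on both source and unlabelled target (real) data, since pretext labels come for free from a known transformation applied to the inputs. The sketch below is a minimal NumPy illustration under stated assumptions: the linear maps stand in for the two CNN branches, the pretext task and all dimensions are hypothetical placeholders (the abstract does not specify them), and the pretext labels here are random stand-ins rather than labels derived from an actual transformation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the correct class.
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-9).mean()

# Hypothetical dimensions: input, per-branch feature, classes, pretext classes.
D_IN, D_FEAT, N_CLASSES, N_PRETEXT = 32, 16, 10, 4

# Linear stand-ins for the RGB and depth CNN branches.
W_rgb = rng.normal(scale=0.1, size=(D_IN, D_FEAT))
W_depth = rng.normal(scale=0.1, size=(D_IN, D_FEAT))
# Two heads on the fused features: main classifier and self-supervised head.
W_cls = rng.normal(scale=0.1, size=(2 * D_FEAT, N_CLASSES))
W_ssl = rng.normal(scale=0.1, size=(2 * D_FEAT, N_PRETEXT))

def forward(rgb, depth):
    fused = np.concatenate([rgb @ W_rgb, depth @ W_depth], axis=1)
    return softmax(fused @ W_cls), softmax(fused @ W_ssl)

# Source batch: labelled synthetic RGB-D data.
src_rgb, src_depth = rng.normal(size=(8, D_IN)), rng.normal(size=(8, D_IN))
src_y = rng.integers(0, N_CLASSES, size=8)
# Target batch: unlabelled real RGB-D data; only the pretext loss applies here.
tgt_rgb, tgt_depth = rng.normal(size=(8, D_IN)), rng.normal(size=(8, D_IN))
# In the real method the pretext labels would be generated by the applied
# transformation; random integers are used here purely to show the shapes.
p_src = rng.integers(0, N_PRETEXT, size=8)
p_tgt = rng.integers(0, N_PRETEXT, size=8)

cls_src, ssl_src = forward(src_rgb, src_depth)
_, ssl_tgt = forward(tgt_rgb, tgt_depth)

lam = 1.0  # weight of the pretext loss (hyperparameter, assumed)
loss = cross_entropy(cls_src, src_y) + lam * (
    cross_entropy(ssl_src, p_src) + cross_entropy(ssl_tgt, p_tgt))
print(loss)
```

Because the pretext head receives gradients from both domains while sharing the feature extractors with the main classifier, minimising this combined loss pushes the shared features toward a representation that transfers from synthetic to real data.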
