Unsupervised Domain Adaptation for Depth Prediction from Images

Cited by: 49
Authors
Tonioni, Alessio [1 ]
Poggi, Matteo [1 ]
Mattoccia, Stefano [1 ]
Di Stefano, Luigi [1 ]
Affiliations
[1] Univ Bologna, Dept Comp Sci & Engn, BO-40126 Bologna, Italy
Keywords
Training; Reliability; Estimation; Loss measurement; Computer architecture; Prediction algorithms; Deep learning; depth estimation; unsupervised learning; self-supervised learning; domain adaptation; cost aggregation; stereo; accurate
DOI
10.1109/TPAMI.2019.2940948
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
State-of-the-art approaches to inferring dense depth measurements from images rely on CNNs trained end-to-end on vast amounts of data. However, these approaches suffer a drastic drop in accuracy when dealing with environments that differ significantly in appearance and/or context from those observed at training time. This domain shift issue is usually addressed by fine-tuning on smaller sets of images from the target domain annotated with depth labels. Unfortunately, relying on such supervised labeling is seldom feasible in most practical settings. Therefore, we propose an unsupervised domain adaptation technique which does not require ground-truth labels. Our method relies only on image pairs and leverages classical stereo algorithms to produce disparity measurements alongside confidence estimators to assess their reliability. We propose to fine-tune both depth-from-stereo and depth-from-mono architectures with a novel confidence-guided loss function that handles the measured disparities as noisy labels weighted according to the estimated confidence. Extensive experimental results based on standard datasets and evaluation protocols prove that our technique can effectively address the domain shift issue with both stereo and monocular depth prediction architectures and outperforms other state-of-the-art unsupervised loss functions that may be alternatively deployed to pursue domain adaptation.
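The confidence-guided loss described in the abstract can be read as a per-pixel regression term between the network's predicted disparity and the noisy disparity produced by a classical stereo algorithm, weighted by the estimated confidence. The PyTorch sketch below is only a minimal illustration under that reading; the function name, the L1 penalty, the optional confidence threshold `tau`, and the normalization are assumptions for clarity, not necessarily the paper's exact formulation.

```python
import torch


def confidence_guided_loss(pred_disp, noisy_disp, confidence, tau=0.0):
    """Illustrative confidence-weighted regression loss on noisy disparity labels.

    pred_disp:  (B, 1, H, W) disparities predicted by the network
    noisy_disp: (B, 1, H, W) disparities from a classical stereo algorithm
    confidence: (B, 1, H, W) per-pixel confidence in [0, 1]
    tau:        optional confidence threshold; pixels at or below it are ignored
    """
    # Keep only measurements whose confidence exceeds the threshold (assumed behavior).
    mask = (confidence > tau).float()
    weights = confidence * mask
    # Per-pixel absolute error, weighted by the estimated confidence.
    per_pixel = weights * torch.abs(pred_disp - noisy_disp)
    # Normalize by the total weight so the loss scale does not depend on
    # how many pixels survive the threshold.
    return per_pixel.sum() / weights.sum().clamp(min=1e-6)
```

In this reading, low-confidence disparities contribute little or nothing to the gradient, so the network adapts to the target domain using only the measurements the confidence estimator deems reliable.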
Pages: 2396-2409
Number of pages: 14