Test-time Domain Adaptation for Monocular Depth Estimation

Cited by: 2
Authors
Li, Zhi [1, 2]
Shi, Shaoshuai [1]
Schiele, Bernt [1 ]
Dai, Dengxin [1 ]
Affiliations
[1] Max Planck Inst Informat, Saarbrucken, Germany
[2] Saarland Univ Campus, Saarbrucken, Germany
DOI
10.1109/ICRA48891.2023.10161304
Chinese Library Classification
TP [Automation and Computer Technology];
Discipline Code
0812
Abstract
Test-time domain adaptation, i.e. adapting source-pretrained models to the test data on the fly in a source-free, unsupervised manner, is a highly practical yet very challenging task. Due to the domain gap between source and target data, inference quality on the target domain can drop drastically, especially in terms of the absolute scale of depth. In addition, unsupervised adaptation can degrade model performance because of inaccurate pseudo labels, and the model can suffer from catastrophic forgetting as errors accumulate over time. We propose a test-time domain adaptation framework for monocular depth estimation that achieves both stability and adaptation performance by combining self-training on the supervised branch with pseudo labels from the self-supervised branch, and that addresses the above problems: a scale alignment scheme aligns the input features between source and target data, correcting absolute-scale inference on the target domain; a pseudo-label consistency check selects confident pixels, improving pseudo-label quality; and regularisation and self-training schemes help avoid catastrophic forgetting. Without requiring further supervision on the target domain, our method adapts source-trained models to the test data with significant improvements over direct inference, producing scale-aware depth maps that outperform the state of the art. Code is available at https://github.com/Malefikus/ada-depth.
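The two core ideas in the abstract — rescaling predictions to correct absolute depth scale, and masking out unreliable pseudo-label pixels — can be illustrated with a minimal sketch. Note this is an illustrative assumption, not the paper's actual implementation: the function names, the median-ratio scaling strategy, and the relative-difference threshold are all hypothetical stand-ins for the feature-level scale alignment and consistency check described above.

```python
import numpy as np

def scale_align(pred_depth, ref_depth):
    """Hypothetical scale alignment: rescale a predicted depth map by
    the ratio of medians so its absolute scale matches a reference.
    (The paper aligns input features; this is a common simpler proxy.)"""
    return pred_depth * (np.median(ref_depth) / np.median(pred_depth))

def consistency_mask(depth_a, depth_b, rel_thresh=0.05):
    """Hypothetical pseudo-label consistency check: keep only pixels
    where the two branches' depth predictions agree within a relative
    threshold, so self-training uses confident pseudo labels."""
    rel_diff = np.abs(depth_a - depth_b) / np.maximum(depth_b, 1e-6)
    return rel_diff < rel_thresh

# Toy example: a prediction that is off by a constant scale factor
pred = np.array([1.0, 2.0, 3.0])
ref = np.array([2.0, 4.0, 6.0])
aligned = scale_align(pred, ref)          # -> [2.0, 4.0, 6.0]

# Only the first pixel agrees between the two branches
mask = consistency_mask(np.array([1.0, 1.1]), np.array([1.0, 2.0]))
```

Confident pixels selected this way would then supply the pseudo-label loss for the supervised branch, while the inconsistent ones are excluded from adaptation.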
Pages: 4873–4879
Page count: 7
Related Papers
50 records in total
  • [31] Real-Time Monocular Depth Estimation using Synthetic Data with Domain Adaptation via Image Style Transfer
    Atapour-Abarghouei, Amir
    Breckon, Toby P.
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 2800 - 2810
  • [32] Video Test-Time Adaptation for Action Recognition
    Lin, Wei
    Mirza, Muhammad Jehanzeb
    Kozinski, Mateusz
    Possegger, Horst
    Kuehne, Hilde
    Bischof, Horst
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 22952 - 22961
  • [33] DomainAdaptor: A Novel Approach to Test-time Adaptation
    Zhang, Jian
    Qi, Lei
    Shi, Yinghuan
    Gao, Yang
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 18925 - 18935
  • [34] Contrastive Domain Adaptation with Test-Time Training for Out-of-Context News Detection
    Gu, Yimeng
    Zhang, Mengqi
    Castro, Ignacio
    Wu, Shu
    Tyson, Gareth
    PATTERN RECOGNITION, 2025, 164
  • [35] In Search of Lost Online Test-Time Adaptation: A Survey
    Wang, Zixin
    Luo, Yadan
    Zheng, Liang
    Chen, Zhuoxiao
    Wang, Sen
    Huang, Zi
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2025, 133 (03) : 1106 - 1139
  • [36] Navigating Continual Test-time Adaptation with Symbiosis Knowledge
    Yang, Xu
    Li, Mogi
    Yin, Jie
    Wei, Kun
    Deng, Cheng
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 5326 - 5334
  • [37] Parameter-free Online Test-time Adaptation
    Boudiaf, Malik
    Mueller, Romain
    Ben Ayed, Ismail
    Bertinetto, Luca
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 8334 - 8343
  • [38] Improved Self-Training for Test-Time Adaptation
    Ma, Jing
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 23701 - 23710
  • [39] Prototypical class-wise test-time adaptation
    Lee, Hojoon
    Lee, Seunghwan
    Jung, Inyoung
    Hong, Sungeun
    PATTERN RECOGNITION LETTERS, 2025, 187 : 49 - 55
  • [40] Efficient Test-Time Model Adaptation without Forgetting
    Niu, Shuaicheng
    Wu, Jiaxiang
    Zhang, Yifan
    Chen, Yaofo
    Zheng, Shijian
    Zhao, Peilin
    Tan, Mingkui
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022