TTA-COPE: Test-Time Adaptation for Category-Level Object Pose Estimation

Cited by: 8
Authors
Lee, Taeyeop [1]
Tremblay, Jonathan [2]
Blukis, Valts [2]
Wen, Bowen [2]
Lee, Byeong-Uk [1]
Shin, Inkyu [1]
Birchfield, Stan [2]
Kweon, In So [1]
Yoon, Kuk-Jin [1]
Affiliations
[1] Korea Adv Inst Sci & Technol, Daejeon, South Korea
[2] NVIDIA, San Francisco, CA USA
Keywords
DOI
10.1109/CVPR52729.2023.02039
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Discipline classification codes: 081104; 0812; 0835; 1405
Abstract
Test-time adaptation methods have been gaining attention recently as a practical solution for addressing source-to-target domain gaps by gradually updating the model without requiring labels on the target data. In this paper, we propose a method of test-time adaptation for category-level object pose estimation called TTA-COPE. We design a pose ensemble approach with a self-training loss using pose-aware confidence. Unlike previous unsupervised domain adaptation methods for category-level object pose estimation, our approach processes the test data in a sequential, online manner, and it does not require access to the source domain at runtime. Extensive experimental results demonstrate that the proposed pose ensemble and the self-training loss improve category-level object pose performance during test time under both semi-supervised and unsupervised settings.
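The self-training scheme sketched in the abstract — a teacher/student pose ensemble, pseudo-labels from the teacher, and a pose-aware confidence that down-weights unreliable pseudo-labels — can be illustrated with a deliberately simplified toy. Everything below is an illustrative assumption, not the paper's method: the "pose" is a bare 3-D translation from a linear regressor, the confidence is an exponential agreement proxy between the two ensemble members, and the teacher is maintained with a mean-teacher EMA update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "pose regressor": 8-D feature -> 3-D translation.
def init_params(dim=8, out=3):
    return {"W": rng.normal(0.0, 0.1, (out, dim)), "b": np.zeros(out)}

def predict(p, x):  # x: (n, dim) -> (n, out)
    return x @ p["W"].T + p["b"]

def confidence(pred, pseudo):
    # Pose-aware confidence proxy: student/teacher agreement.
    # Small disagreement -> weight near 1; large -> near 0.
    return np.exp(-((pred - pseudo) ** 2).sum(-1))

def tta_step(student, teacher, x, lr=0.05, m=0.99):
    pseudo = predict(teacher, x)          # teacher pseudo-labels; no target labels used
    pred = predict(student, x)
    d = pred - pseudo                     # (n, 3) residual per sample
    w = confidence(pred, pseudo)          # (n,), treated as a constant in the gradient
    loss = (w * (d ** 2).sum(-1)).mean()  # confidence-weighted self-training loss
    n = x.shape[0]
    grad_pred = (2.0 / n) * w[:, None] * d
    student["W"] -= lr * grad_pred.T @ x  # manual gradient step for the linear model
    student["b"] -= lr * grad_pred.sum(0)
    for k in teacher:                     # mean-teacher EMA: teacher drifts toward student
        teacher[k] = m * teacher[k] + (1.0 - m) * student[k]
    return float(loss)

student, teacher = init_params(), init_params()  # independently initialized ensemble
x = rng.normal(size=(4, 8))  # a single target-domain batch, processed online

def gap():  # ensemble disagreement on the current batch
    return float(((predict(student, x) - predict(teacher, x)) ** 2).sum(-1).mean())

gap_before = gap()
losses = [tta_step(student, teacher, x) for _ in range(20)]
gap_after = gap()  # shrinks: both the descent step and the EMA pull the pair together
```

A real online stream would feed a new frame into `tta_step` at every iteration; here one fixed batch keeps the example deterministic. The point of the sketch is the interplay of the three ingredients: the detached confidence `w` suppresses updates from pseudo-labels the ensemble disagrees on, while the EMA keeps the teacher a slowly-moving average of the adapting student.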
Pages: 21285-21295
Page count: 11