OP-Align: Object-Level and Part-Level Alignment for Self-supervised Category-Level Articulated Object Pose Estimation

Citations: 0
Authors
Che, Yuchen [1 ]
Furukawa, Ryo [2 ]
Kanezaki, Asako [1 ]
Affiliations
[1] Tokyo Inst Technol, Tokyo, Japan
[2] Accenture Japan Ltd, Tokyo, Japan
Keywords: 6DOF object pose estimation; Dataset creation; Unsupervised learning
DOI: 10.1007/978-3-031-73226-3_5
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
Category-level articulated object pose estimation focuses on estimating the poses of unknown articulated objects within known categories. Despite its significance, this task remains challenging due to the varying shapes and poses of objects, the high cost of dataset annotation, and complex real-world environments. In this paper, we propose a novel self-supervised approach that leverages a single-frame point cloud to solve this task. Our model consistently generates a reconstruction with a canonical pose and joint state for the entire input object, and it estimates object-level poses that reduce overall pose variance and part-level poses that align each part of the input with its corresponding part of the reconstruction. Experimental results demonstrate that our approach significantly outperforms previous self-supervised methods and is comparable to state-of-the-art supervised methods. To assess the performance of our model in real-world scenarios, we also introduce a new real-world articulated object benchmark dataset. (Code and dataset are released at https://github.com/YC-Che/OP-Align.)
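The abstract describes aligning each part of the input point cloud with its corresponding part of a canonical reconstruction. The paper's own alignment pipeline is learned and is detailed only in the full text; as a generic illustration, rigid alignment between two corresponding point clouds is classically solved in closed form by the Kabsch/Procrustes algorithm via SVD. The sketch below is that standard algorithm, not the authors' implementation; all names here are illustrative.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid alignment (Kabsch algorithm).

    Finds rotation R and translation t minimizing ||(src @ R.T + t) - dst||,
    given two (N, 3) point clouds with known one-to-one correspondence.
    """
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    # 3x3 cross-covariance of the centered point sets.
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the optimal orthogonal matrix.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Sanity check: recover a known rotation/translation of a synthetic "part".
rng = np.random.default_rng(0)
part = rng.standard_normal((100, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
moved = part @ R_true.T + t_true
R_est, t_est = rigid_align(part, moved)
```

In a learned setting such as the one the abstract sketches, the correspondences themselves (which input point matches which reconstruction point, and which part it belongs to) are what the network must predict; the closed-form step above only applies once correspondences are available.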
Pages: 72-88 (17 pages)
Related Papers (50 in total)
  • [1] Category-Level Articulated Object Pose Estimation
    Li, Xiaolong
    Wang, He
    Yi, Li
    Guibas, Leonidas
    Abbott, A. Lynn
    Song, Shuran
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 3703 - 3712
  • [2] Leveraging SE(3) Equivariance for Self-Supervised Category-Level Object Pose Estimation
    Li, Xiaolong
    Weng, Yijia
    Yi, Li
    Guibas, Leonidas
    Abbott, A. Lynn
    Song, Shuran
    Wang, He
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [3] Robotic Grasp Detection Based on Category-Level Object Pose Estimation With Self-Supervised Learning
    Yu, Sheng
    Zhai, Di-Hua
    Xia, Yuanqing
    IEEE-ASME TRANSACTIONS ON MECHATRONICS, 2024, 29 (01) : 625 - 635
  • [4] Self-Supervised Category-Level 6D Object Pose Estimation With Optical Flow Consistency
    Zaccaria, Michela
    Manhardt, Fabian
    Di, Yan
    Tombari, Federico
    Aleotti, Jacopo
    Giorgini, Mikhail
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (05) : 2510 - 2517
  • [5] Self-Supervised Category-Level 6D Object Pose Estimation with Deep Implicit Shape Representation
    Peng, Wanli
    Yan, Jianhang
    Wen, Hongtao
    Sun, Yi
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 2082 - 2090
  • [6] Category-Level Object Pose Estimation with Statistic Attention
    Jiang, Changhong
    Mu, Xiaoqiao
    Zhang, Bingbing
    Liang, Chao
    Xie, Mujun
    SENSORS, 2024, 24 (16)
  • [7] iCaps: Iterative Category-Level Object Pose and Shape Estimation
    Deng, Xinke
    Geng, Junyi
    Bretl, Timothy
    Xiang, Yu
    Fox, Dieter
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (02): : 1784 - 1791
  • [8] A Visual Navigation Perspective for Category-Level Object Pose Estimation
    Guo, Jiaxin
    Zhong, Fangxun
    Xiong, Rong
    Liu, Yunhui
    Wang, Yue
    Liao, Yiyi
    COMPUTER VISION - ECCV 2022, PT VI, 2022, 13666 : 123 - 141
  • [9] Zero-Shot Category-Level Object Pose Estimation
    Goodwin, Walter
    Vaze, Sagar
    Havoutis, Ioannis
    Posner, Ingmar
    COMPUTER VISION, ECCV 2022, PT XXXIX, 2022, 13699 : 516 - 532
  • [10] Category-Level Metric Scale Object Shape and Pose Estimation
    Lee, Taeyeop
    Lee, Byeong-Uk
    Kim, Myungchul
    Kweon, I. S.
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6 (04) : 8575 - 8582