2D Amodal Instance Segmentation Guided by 3D Shape Prior

Cited by: 5
Authors
Li, Zhixuan [1 ,2 ]
Ye, Weining [2 ]
Jiang, Tingting [1 ,2 ]
Huang, Tiejun [2 ]
Affiliations
[1] Peking Univ, Adv Inst Informat Technol, Hangzhou, Peoples R China
[2] Peking Univ, Sch Comp Sci, Natl Engn Res Ctr Visual Technol, Beijing, Peoples R China
Keywords
Amodal; Occlusion; Instance segmentation
DOI
10.1007/978-3-031-19818-2_10
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Amodal instance segmentation (AIS) aims to predict the complete mask of an occluded instance, covering both its visible and invisible regions. Existing 2D AIS methods learn and predict the complete silhouettes of target instances in 2D space. However, a mask in 2D space is only one observation, a sample of the 3D model from a particular viewpoint, and therefore cannot represent the true complete physical shape of the instance. Because they learn only such 2D masks, 2D amodal methods struggle to generalize to viewpoints not covered by the training dataset. To tackle these problems, we are motivated by two observations: (1) a 2D amodal mask is the projection of a complete 3D model, and (2) the complete 3D model can be recovered and reconstructed from the occluded 2D object instance. This paper builds a bridge between occluded 2D instances and complete 3D models via 3D reconstruction, and utilizes the resulting 3D shape prior for 2D AIS. To handle the diversity of 3D shapes, our method is pretrained on large 3D reconstruction datasets for high-quality results, and we adopt an unsupervised 3D reconstruction method to avoid relying on 3D annotations. In this way, our method can reconstruct 3D models from occluded 2D object instances and generalize to new, unseen 2D viewpoints of the 3D object. Experiments demonstrate that our method outperforms all existing 2D AIS methods.
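The pipeline the abstract describes — reconstruct a complete 3D shape from an occluded 2D instance, then project it back into the input viewpoint as a prior for the amodal mask — can be illustrated with a minimal sketch. Everything below (the module names, the voxel representation, the orthographic projection, and all layer sizes) is an illustrative assumption for exposition, not the authors' implementation.

```python
# Hypothetical sketch of the 3D-shape-prior-guided AIS pipeline:
# encoder (occluded 2D crop -> latent shape code) -> decoder (code -> voxel
# occupancy) -> projection (voxels -> 2D amodal silhouette prior) -> mask head.
import torch
import torch.nn as nn

class ShapePriorAIS(nn.Module):
    def __init__(self, latent_dim=128, voxel_res=32):
        super().__init__()
        self.voxel_res = voxel_res
        # 2D encoder: occluded RGB crop -> latent 3D shape code.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # 3D decoder: latent code -> voxel occupancy logits.
        self.decoder = nn.Linear(latent_dim, voxel_res ** 3)
        # 2D mask head: crop + projected shape prior -> amodal mask.
        self.mask_head = nn.Conv2d(3 + 1, 1, 3, padding=1)

    def forward(self, crop):  # crop: (B, 3, H, W)
        z = self.encoder(crop)
        voxels = torch.sigmoid(
            self.decoder(z).view(-1, self.voxel_res,
                                 self.voxel_res, self.voxel_res))
        # Orthographic projection along the depth axis: the max occupancy
        # per pixel gives a 2D amodal silhouette prior in the input
        # viewpoint (a deliberate simplification of camera projection).
        prior = voxels.max(dim=-1).values.unsqueeze(1)      # (B, 1, R, R)
        prior = nn.functional.interpolate(prior, size=crop.shape[-2:],
                                          mode="bilinear", align_corners=False)
        # Fuse the projected prior with the image to predict the full mask.
        return torch.sigmoid(self.mask_head(torch.cat([crop, prior], dim=1)))

# Usage: a random occluded crop yields an amodal mask of the same size.
mask = ShapePriorAIS()(torch.randn(2, 3, 64, 64))
print(mask.shape)  # torch.Size([2, 1, 64, 64])
```

Because the prior is produced by projecting an explicit 3D shape, the same reconstructed model can in principle be re-projected under other camera poses, which is the mechanism behind the generalization-to-new-viewpoints claim in the abstract.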
Pages: 165-181
Page count: 17