Depth-Aware Mirror Segmentation

Cited by: 26
Authors:
Mei, Haiyang [1 ]
Dong, Bo [2 ]
Dong, Wen [1 ]
Peers, Pieter [3 ]
Yang, Xin [1 ]
Zhang, Qiang [1 ]
Wei, Xiaopeng [1 ]
Affiliations:
[1] Dalian Univ Technol, Dalian, Liaoning, Peoples R China
[2] SRI Int, 333 Ravenswood Ave, Menlo Pk, CA 94025 USA
[3] Coll William & Mary, Williamsburg, VA 23187 USA
Funding:
National Natural Science Foundation of China
DOI: 10.1109/CVPR46437.2021.00306
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
We present a novel mirror segmentation method that leverages depth estimates from ToF-based cameras as an additional cue to disambiguate challenging cases where the contrast or relation in RGB colors between the mirror reflection and the surrounding scene is subtle. A key observation is that ToF depth estimates do not report the true depth of the mirror surface, but instead return the total length of the reflected light paths, thereby creating obvious depth discontinuities at the mirror boundaries. To exploit depth information in mirror segmentation, we first construct a large-scale RGB-D mirror segmentation dataset, which we subsequently employ to train a novel depth-aware mirror segmentation framework. Our mirror segmentation framework first locates the mirrors based on color and depth discontinuities and correlations. Next, our model further refines the mirror boundaries through contextual contrast, taking into account both color and depth information. We extensively validate our depth-aware mirror segmentation method and demonstrate that our model outperforms state-of-the-art RGB and RGB-D based methods for mirror segmentation. Experimental results also show that depth is a powerful cue for mirror segmentation.
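To illustrate the key observation in the abstract, a minimal Python sketch (not the authors' method; the function name, relative threshold, and normalization below are assumptions for this example): because the ToF depth inside a mirror reports reflected path lengths rather than the surface depth, the depth map shows strong discontinuities along the mirror boundary, which a simple gradient test can expose.

    import numpy as np

    def depth_discontinuity_map(depth: np.ndarray, rel_thresh: float = 0.15) -> np.ndarray:
        """Mark pixels with strong relative depth discontinuities.

        depth: H x W ToF depth map in meters (assumed dense and positive).
        rel_thresh: assumed threshold on gradient magnitude relative to local depth.
        """
        # Finite-difference gradients of the depth map (row direction, then column direction).
        gy, gx = np.gradient(depth.astype(np.float32))
        grad_mag = np.hypot(gx, gy)
        # Normalize by local depth so distant surfaces are not over-penalized.
        local_depth = np.maximum(depth, 1e-3)
        return (grad_mag / local_depth) > rel_thresh

    # Usage sketch: intersect with an RGB edge map (however it is obtained) so
    # that only boundaries supported by both modalities are kept.
    # candidate_boundary = depth_discontinuity_map(tof_depth) & rgb_edges

Such a discontinuity map is only one cue; the paper's framework combines color and depth discontinuities and correlations to locate mirrors and then refines the boundaries with contextual contrast over both modalities.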
Pages: 3043-3052
Number of pages: 10
Related Papers (50 total)
• [31] He, Junwen; Wang, Yifan; Wang, Lijun; Lu, Huchuan; Luo, Bin; He, Jun-Yan; Lan, Jin-Peng; Geng, Yifeng; Xie, Xuansong. Towards Deeply Unified Depth-aware Panoptic Segmentation with Bi-directional Guidance Learning. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023: 4088-4098.
• [32] Chen, Suting; Shao, Dongwei; Zhang, Liangchen; Zhang, Chuang. Learning depth-aware features for indoor scene understanding. Multimedia Tools and Applications, 2022, 81(29): 42573-42590.
• [33] Zhang, Haoyang; He, Xuming; Porikli, Fatih; Kneip, Laurent. Semantic Context and Depth-Aware Object Proposal Generation. 2016 IEEE International Conference on Image Processing (ICIP), 2016: 1-5.
• [34] Abbott, Joshua; Morse, Bryan. Interactive Depth-Aware Effects for Stereo Image Editing. 2013 International Conference on 3D Vision (3DV), 2013: 263-270.
• [35] Oh, Juhyun; Sohn, Kwanghoon. A Depth-Aware Character Generator for 3DTV. IEEE Transactions on Broadcasting, 2012, 58(4): 523-532.
• [37] Song, Hangke; Liu, Zhi; Du, Huan; Sun, Guangling; Le Meur, Olivier; Ren, Tongwei. Depth-Aware Salient Object Detection and Segmentation via Multiscale Discriminative Saliency Fusion and Bootstrap Learning. IEEE Transactions on Image Processing, 2017, 26(9): 4204-4216.
• [38] Wan, Yingcai; Zhao, Qiankun; Xu, Jiqian; Wang, Huaizhen; Fang, Lijin. DAGNet: Depth-aware Glass-like objects segmentation via cross-modal attention. Journal of Visual Communication and Image Representation, 2024, 100.
• [39] Huang, Chenwei; Pan, Xiong; Cheng, Jingchun; Song, Jiajie. Deep Image Registration With Depth-Aware Homography Estimation. IEEE Signal Processing Letters, 2023, 30: 6-10.
• [40] Porzi, Lorenzo; Bulo, Samuel Rota; Penate-Sanchez, Adrian; Ricci, Elisa; Moreno-Noguer, Francesc. Learning Depth-Aware Deep Representations for Robotic Perception. IEEE Robotics and Automation Letters, 2017, 2(2): 468-475.