PanopticDepth: A Unified Framework for Depth-aware Panoptic Segmentation

Cited by: 5
Authors
Gao, Naiyu [1 ,2 ]
He, Fei [1 ,2 ]
Jia, Jian [1 ,2 ]
Shan, Yanhu [4 ]
Zhang, Haoyang [4 ]
Zhao, Xin [1 ,2 ]
Huang, Kaiqi [1 ,2 ,3 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, CRISE, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing, Peoples R China
[3] CAS Ctr Excellence Brain Sci & Intelligence Techn, Shanghai, Peoples R China
[4] Horizon Robot Inc, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
DOI
10.1109/CVPR52688.2022.00168
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents a unified framework for depth-aware panoptic segmentation (DPS), which aims to reconstruct the 3D scene with instance-level semantics from a single image. Prior works address this problem by simply adding a dense depth regression head to a panoptic segmentation (PS) network, resulting in two independent task branches. This neglects the mutually beneficial relations between the two tasks, failing to exploit readily available instance-level semantic cues to boost depth accuracy and producing suboptimal depth maps. To overcome these limitations, we propose a unified framework for the DPS task by applying a dynamic convolution technique to both the PS and depth prediction tasks. Specifically, instead of predicting depth for all pixels at once, we generate instance-specific kernels to predict the depth and segmentation mask of each instance. Moreover, leveraging this instance-wise depth estimation scheme, we add instance-level depth cues to help supervise depth learning via a new depth loss. Extensive experiments on Cityscapes-DPS and SemKITTI-DPS show the effectiveness and promise of our method. We hope our unified solution to DPS can lead to a new paradigm in this area. Code is available at https://github.com/NaiyuGao/PanopticDepth.
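The core idea in the abstract — generating instance-specific kernels that predict a mask and a depth map per instance, rather than one dense depth map for all pixels — can be illustrated with a minimal numpy sketch. All names, shapes, and the exponential depth parameterization below are illustrative assumptions, not the authors' implementation; a 1x1 dynamic convolution is written here as a matrix multiply over flattened features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared feature map from a backbone: C channels over an H x W grid.
C, H, W = 8, 16, 16
features = rng.standard_normal((C, H, W))

# Hypothetical per-instance dynamic kernels, as a kernel-generation head
# might emit them: one 1x1 conv kernel (a C-vector plus a bias) per
# instance, one set for mask logits and one for depth.
num_instances = 3
mask_kernels = rng.standard_normal((num_instances, C))
mask_biases = rng.standard_normal(num_instances)
depth_kernels = rng.standard_normal((num_instances, C))
depth_biases = rng.standard_normal(num_instances)

# A 1x1 convolution over the feature map is a matrix multiply on the
# flattened (C, H*W) features.
flat = features.reshape(C, H * W)
mask_logits = mask_kernels @ flat + mask_biases[:, None]  # (N, H*W)
raw_depth = depth_kernels @ flat + depth_biases[:, None]  # (N, H*W)

# Per-instance binary masks and strictly positive depth maps (exp is
# one common way to keep predicted depth positive; an assumption here).
masks = mask_logits.reshape(num_instances, H, W) > 0
depth = np.exp(raw_depth.reshape(num_instances, H, W))

# Fuse the per-instance depth maps into a single map: each pixel takes
# the depth of an instance whose mask covers it (later instances win on
# overlap in this sketch; background pixels stay at 0).
fused = np.zeros((H, W))
for i in range(num_instances):
    fused[masks[i]] = depth[i][masks[i]]
```

Because each instance owns its own depth kernel, instance-level supervision (e.g. a per-instance term in the depth loss, as the abstract describes) can be applied directly to `depth[i]` under `masks[i]` rather than only to the fused dense map.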
Pages: 1622-1632
Page count: 11
Related Papers
50 records in total
  • [21] Efficient Unet with depth-aware gated fusion for automatic skin lesion segmentation
    Ding, Xiangwen
    Wang, Shengsheng
    [J]. Journal of Intelligent and Fuzzy Systems, 2021, 40(05): 9963-9975
  • [22] Depth-Aware Image Seam Carving
    Shen, Jianbing
    Wang, Dapeng
    Li, Xuelong
    [J]. IEEE TRANSACTIONS ON CYBERNETICS, 2013, 43(05): 1453-1461
  • [23] Part-aware Panoptic Segmentation
    de Geus, Daan
    Meletis, Panagiotis
    Lu, Chenyang
    Wen, Xiaoxiao
    Dubbelman, Gijs
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021: 5481-5490
  • [24] Depth-Aware Image Colorization Network
    Chu, Wei-Ta
    Hsu, Yu-Ting
    [J]. PROCEEDINGS OF THE 2018 WORKSHOP ON UNDERSTANDING SUBJECTIVE ATTRIBUTES OF DATA, WITH THE FOCUS ON EVOKED EMOTIONS (EE-USAD'18), 2018: 17-23
  • [25] Depth-Aware Video Frame Interpolation
    Bao, Wenbo
    Lai, Wei-Sheng
    Ma, Chao
    Zhang, Xiaoyun
    Gao, Zhiyong
    Yang, Ming-Hsuan
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019: 3698-3707
  • [26] Depth-aware image vectorization and editing
    Shufang Lu
    Wei Jiang
    Xuefeng Ding
    Craig S. Kaplan
    Xiaogang Jin
    Fei Gao
    Jiazhou Chen
    [J]. The Visual Computer, 2019, 35: 1027-1039
  • [27] Uncertainty-Aware Panoptic Segmentation
    Sirohi, Kshitij
    Marvi, Sajad
    Buescher, Daniel
    Burgard, Wolfram
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8(05): 2629-2636
  • [28] Depth-aware image vectorization and editing
    Lu, Shufang
    Jiang, Wei
    Ding, Xuefeng
    Kaplan, Craig S.
    Jin, Xiaogang
    Gao, Fei
    Chen, Jiazhou
    [J]. VISUAL COMPUTER, 2019, 35(6-8): 1027-1039
  • [29] Panoptic-PartFormer: Learning a Unified Model for Panoptic Part Segmentation
    Li, Xiangtai
    Xu, Shilin
    Yang, Yibo
    Cheng, Guangliang
    Tong, Yunhai
    Tao, Dacheng
    [J]. COMPUTER VISION - ECCV 2022, PT XXVII, 2022, 13687: 729-747
  • [30] DEPTH-AWARE LAYERED EDGE FOR OBJECT PROPOSAL
    Liu, Jing
    Ren, Tongwei
    Bao, Bing-Kun
    Bei, Jia
    [J]. 2016 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO (ICME), 2016