RGB-D Gate-guided edge distillation for indoor semantic segmentation

Cited by: 7
Authors
Zou, Wenbin [1 ,2 ]
Peng, Yingqing [1 ,2 ]
Zhang, Zhengyu [1 ,2 ]
Tian, Shishun [1 ,2 ]
Li, Xia [1 ,2 ]
Affiliations
[1] Shenzhen Univ, Coll Elect & Informat Engn, Shenzhen 518060, Peoples R China
[2] Shenzhen Univ, Guangdong Prov Key Lab Intelligent Informat Proc, Shenzhen Key Lab Adv Machine Learning & Applicat, Shenzhen 518060, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
RGB-D Semantic Segmentation; Edge Distillation; Gate; Deep Learning
DOI
10.1007/s11042-021-11395-w
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Fusing RGB and depth information can significantly improve semantic segmentation performance, since depth data provides complementary geometric cues. In this paper, we propose a novel Gate-guided Edge Distillation (GED) approach that fuses RGB and depth data to generate edge-aware features, which in turn assist high-level semantic prediction. The proposed GED consists of two modules: gated fusion and edge distillation. The gated fusion module adaptively learns the relationship between RGB and depth data to generate complementary features. To counter the adverse effects of redundant information in the edge-aware features, the edge distillation module enhances semantic features belonging to the same object while preserving the discrimination between semantic features of different objects. Using the distilled edge-aware features as detailed guidance, the proposed edge-guided fusion module fuses them with the semantic features. In addition, the complementary features are leveraged in a multi-level feature fusion module to further enhance detailed information. Extensive experiments on the widely used SUN-RGBD and NYUDv2 datasets demonstrate that the proposed approach with a ResNet-50 backbone achieves state-of-the-art performance.
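The abstract describes the gated fusion module only at a high level. As a rough illustration, and not the authors' implementation, the following minimal PyTorch sketch shows one common way such an adaptive gate between RGB and depth features can be realized; the module name `GatedFusion`, the channel sizes, and the convex-combination gating formula are all assumptions.

```python
# Minimal sketch of gated RGB-D feature fusion (hypothetical, not the
# authors' GED code). A gate predicted from both modalities decides,
# per location and channel, how much RGB versus depth information to keep.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Predict a gate in [0, 1] from the concatenated modalities.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([rgb_feat, depth_feat], dim=1))
        # Convex combination: g weights RGB, (1 - g) weights depth.
        return g * rgb_feat + (1.0 - g) * depth_feat

if __name__ == "__main__":
    fuse = GatedFusion(channels=256)
    rgb = torch.randn(1, 256, 60, 80)    # e.g. features from a ResNet-50 stage
    depth = torch.randn(1, 256, 60, 80)  # depth-branch features, same shape
    print(fuse(rgb, depth).shape)        # torch.Size([1, 256, 60, 80])
```

A sigmoid gate bounds the per-location mixing weight to [0, 1], so the block can fall back to either modality where the other is unreliable, for example depth measurements on reflective or distant surfaces.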
Pages: 35815-35830
Number of pages: 16
Related papers
50 records in total
  • [1] Depth Removal Distillation for RGB-D Semantic Segmentation
    Fang, Tiyu
    Liang, Zhen
    Shao, Xiuli
    Dong, Zihao
    Li, Jinping
    [J]. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 2405-2409
  • [2] Accurate semantic segmentation of RGB-D images for indoor navigation
    Sharan, Sudeep
    Nauth, Peter
    Dominguez-Jimenez, Juan-Jose
    [J]. Journal of Electronic Imaging, 2022, 31(06)
  • [3] Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis
    Seichter, Daniel
    Koehler, Mona
    Lewandowski, Benjamin
    Wengefeld, Tim
    Gross, Horst-Michael
    [J]. 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021: 13525-13531
  • [4] RGB×D: Learning depth-weighted RGB patches for RGB-D indoor semantic segmentation
    Cao, Jinming
    Leng, Hanchao
    Cohen-Or, Daniel
    Lischinski, Dani
    Chen, Ying
    Tu, Changhe
    Li, Yangyan
    [J]. Neurocomputing, 2021, 462: 568-580
  • [5] Multi-scale fusion for RGB-D indoor semantic segmentation
    Jiang, Shiyi
    Xu, Yang
    Li, Danyang
    Fan, Runze
    [J]. Scientific Reports, 2022, 12(1)
  • [6] RGB-D indoor semantic segmentation network based on wavelet transform
    Fan, Runze
    Liu, Yuhong
    Jiang, Shiyi
    Zhang, Rongfen
    [J]. Evolving Systems, 2023, 14(06): 981-991
  • [7] Semantics-Guided Multi-Level RGB-D Feature Fusion for Indoor Semantic Segmentation
    Li, Yabei
    Zhang, Junge
    Cheng, Yanhua
    Huang, Kaiqi
    Tan, Tieniu
    [J]. 2017 24th IEEE International Conference on Image Processing (ICIP), 2017: 1262-1266