Multi-scale fusion for RGB-D indoor semantic segmentation

Cited by: 5
Authors
Jiang, Shiyi [1 ]
Xu, Yang [1 ,2 ]
Li, Danyang [1 ]
Fan, Runze [1 ]
Affiliations
[1] Guizhou Univ, Coll Big Data & Informat Engn, Guiyang 550025, Peoples R China
[2] Guiyang Aluminum Magnesium Design & Res Inst Co Ltd, Guiyang 550009, Peoples R China
Source
SCIENTIFIC REPORTS | 2022, Vol. 12, No. 1
Keywords
DOI
10.1038/s41598-022-24836-9
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09
Abstract
In computer vision, convolution and pooling operations tend to lose high-frequency information, and contour details fade as the network deepens; this is especially harmful in image semantic segmentation. In RGB-D semantic segmentation, existing methods fail to exploit all of the useful information in the RGB and depth images, whereas a wavelet-transform representation preserves both the low- and high-frequency content of the original image. To address this information loss, we propose an RGB-D indoor semantic segmentation network based on multi-scale fusion: a wavelet transform fusion module retains contour details, a nonsubsampled contourlet transform replaces the pooling operation, and a multi-pyramid module aggregates multi-scale and global contextual information. With the help of the wavelet transform, the proposed method preserves multi-scale characteristics and fully exploits the complementarity of high- and low-frequency information. Because the multi-frequency characteristics are preserved as the convolutional network deepens, segmentation accuracy on edge and contour details also improves. We evaluated the proposed method on the widely used indoor datasets NYUv2 and SUN RGB-D; the results show state-of-the-art performance with real-time inference.
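The abstract's key idea is replacing pooling with a transform that keeps high-frequency sub-bands instead of discarding them (the paper itself uses a nonsubsampled contourlet transform). As a rough illustration of that idea only, below is a minimal PyTorch sketch of a Haar-wavelet downsampling block: the class name HaarWaveletDownsample is hypothetical, and the simple Haar transform stands in for the NSCT purely to show how sub-band decomposition can serve as a detail-preserving substitute for pooling.

import torch
import torch.nn as nn

class HaarWaveletDownsample(nn.Module):
    """Hypothetical sketch: a 2-D Haar wavelet transform used in place of
    pooling. The input is split into one low-frequency (LL) and three
    high-frequency (LH, HL, HH) sub-bands at half resolution, so edge and
    contour detail is carried forward in the channels rather than lost."""

    def forward(self, x):
        # Split the feature map (N, C, H, W) into even/odd rows and columns.
        x00 = x[..., 0::2, 0::2]  # even rows, even cols
        x01 = x[..., 0::2, 1::2]  # even rows, odd cols
        x10 = x[..., 1::2, 0::2]  # odd rows, even cols
        x11 = x[..., 1::2, 1::2]  # odd rows, odd cols
        ll = (x00 + x01 + x10 + x11) / 4.0  # low-frequency approximation
        lh = (x00 + x01 - x10 - x11) / 4.0  # horizontal detail
        hl = (x00 - x01 + x10 - x11) / 4.0  # vertical detail
        hh = (x00 - x01 - x10 + x11) / 4.0  # diagonal detail
        # Concatenate sub-bands along channels: output is (N, 4C, H/2, W/2).
        return torch.cat([ll, lh, hl, hh], dim=1)

if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)
    out = HaarWaveletDownsample()(feat)
    print(out.shape)  # torch.Size([1, 256, 16, 16])

Unlike max pooling, this halves spatial resolution without discarding information: the LH/HL/HH channels carry the edge detail that later fusion stages can exploit, which matches the abstract's claim about preserving high-frequency content as depth increases.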
Pages: 15