Depth-aware inverted refinement network for RGB-D salient object detection

Cited by: 5
Authors
Gao, Lina [1 ]
Liu, Bing [1 ]
Fu, Ping [1 ]
Xu, Mingzhu [2 ]
Affiliations
[1] Harbin Inst Technol, Sch Elect & Informat Engn, Harbin 150001, Heilongjiang, Peoples R China
[2] Shandong Univ, Sch Software, Jinan 250101, Shandong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Salient object detection; RGB-D image; Inverted refinement; Cross-level multi-modal features; ATTENTION; IMAGE;
DOI
10.1016/j.neucom.2022.11.031
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent advances in multi-modal feature fusion have boosted the development of RGB-D salient object detection (SOD), and many remarkable RGB-D SOD models have been proposed. However, although some existing methods fuse cross-level multi-modal features, they ignore the differences among the multi-modal details carried at different levels of convolutional neural network (CNN) based RGB-D SOD models. Exploring the correlations and differences of cross-level multi-modal features is therefore a critical issue. In this paper, we present a novel depth-aware inverted refinement network (DAIR) that progressively guides the cross-level multi-modal features through backward propagation, which considerably preserves the level-specific details together with multi-modal cues. Specifically, we design an end-to-end inverted refinement network that guides cross-level and cross-modal learning to reveal the complementary relations between modalities, and that refines low-level spatial details with high-level global contextual cues. In particular, considering the differences between modalities and the effect of depth quality, a depth-aware intensified module (DAIM) is proposed to capture the paired pixel-level and inter-channel relationships of the depth map, which promotes the representative capability of the depth details. Extensive experiments on nine challenging RGB-D SOD datasets demonstrate remarkable performance gains of our proposed model over fourteen state-of-the-art (SOTA) RGB-D SOD approaches.
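The abstract states only that DAIM captures paired pixel-level and inter-channel relationships of the depth map; the module's actual layers are not given in this record. As a minimal sketch of that general idea, assuming simple pooling-based gating (a channel gate from global average pooling and a spatial gate from the channel-wise mean, both squashed with a sigmoid), one could write:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def daim_sketch(depth_feat):
    """Hypothetical sketch of a depth-aware intensified module (DAIM).

    Assumptions (not from the paper): the inter-channel relationship is
    modeled as one gate per channel derived from its global mean, and the
    pixel-level relationship as one gate per spatial position derived from
    the channel-wise mean. Both gates multiplicatively re-weight the
    depth features.

    depth_feat: (C, H, W) feature tensor from a depth branch.
    """
    # Inter-channel gate: one scalar in (0, 1) per channel.
    channel_gate = sigmoid(depth_feat.mean(axis=(1, 2)))   # shape (C,)
    # Pixel-level gate: one scalar in (0, 1) per spatial position.
    spatial_gate = sigmoid(depth_feat.mean(axis=0))        # shape (H, W)
    # Intensify depth features with both gates via broadcasting.
    return depth_feat * channel_gate[:, None, None] * spatial_gate[None, :, :]

feat = np.random.default_rng(0).standard_normal((4, 8, 8))
enhanced = daim_sketch(feat)
print(enhanced.shape)  # (4, 8, 8)
```

Because both gates lie in (0, 1), the module attenuates unreliable depth responses rather than amplifying them; the paper's actual design may of course use learned convolutions instead of parameter-free pooling.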
Pages: 507 - 522
Number of pages: 16
Related papers
50 records in total
  • [41] Triple-Complementary Network for RGB-D Salient Object Detection
    Huang, Rui
    Xing, Yan
    Zou, Yaobin
    [J]. IEEE SIGNAL PROCESSING LETTERS, 2020, 27 : 775 - 779
  • [42] Hierarchical Alternate Interaction Network for RGB-D Salient Object Detection
    Li, Gongyang
    Liu, Zhi
    Chen, Minyu
    Bai, Zhen
    Lin, Weisi
    Ling, Haibin
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 3528 - 3542
  • [43] Hybrid-Attention Network for RGB-D Salient Object Detection
    Chen, Yuzhen
    Zhou, Wujie
    [J]. APPLIED SCIENCES-BASEL, 2020, 10 (17)
  • [44] DMNet: Dynamic Memory Network for RGB-D Salient Object Detection
    Du, Haishun
    Zhang, Zhen
    Zhang, Minghao
    Qiao, Kangyi
    [J]. DIGITAL SIGNAL PROCESSING, 2023, 142
  • [46] An adaptive guidance fusion network for RGB-D salient object detection
    Sun, Haodong
    Wang, Yu
    Ma, Xinpeng
    [J]. SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (02) : 1683 - 1693
  • [47] Scale Adaptive Fusion Network for RGB-D Salient Object Detection
    Kong, Yuqiu
    Zheng, Yushuo
    Yao, Cuili
    Liu, Yang
    Wang, He
    [J]. COMPUTER VISION - ACCV 2022, PT III, 2023, 13843 : 608 - 625
  • [48] Salient object detection for RGB-D images by generative adversarial network
    Liu, Zhengyi
    Tang, Jiting
    Xiang, Qian
    Zhao, Peng
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (35-36) : 25403 - 25425
  • [50] HiDAnet: RGB-D Salient Object Detection via Hierarchical Depth Awareness
    Wu, Zongwei
    Allibert, Guillaume
    Meriaudeau, Fabrice
    Ma, Chao
    Demonceaux, Cedric
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 2160 - 2173