Aggregate interactive learning for RGB-D salient object detection

Cited by: 8
Authors
Wu, Jingyu [1 ]
Sun, Fuming [1 ]
Xu, Rui [1 ]
Meng, Jie [1 ]
Wang, Fasheng [1 ]
Affiliations
[1] Dalian Minzu Univ, Sch Informat & Commun Engn, Dalian 116600, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Neural network; Salient object detection; Deformable convolution; Feature fusion module; FEATURES; NETWORK;
DOI
10.1016/j.eswa.2022.116614
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Salient object detection aims to find the most noticeable regions in an image. On the one hand, most existing RGB-D salient object detection methods require an additional sub-network to process depth features, and that sub-network depends heavily on the RGB network, resulting in high computational cost. On the other hand, when handling multi-scale features, most models suffer information loss and weak semantic representation, so they cannot achieve good detection results, which limits their practical application. This paper first proposes an aggregation-and-interaction strategy that extracts edge features, depth features and salient features while preserving local details and fully capturing global information. Second, during the learning of high-level features, depth features and salient features are extracted simultaneously, which reduces network complexity and removes the need for an additional sub-network. Third, a deformable convolution network is used to address the multi-scale problem and to ensure that more detailed feature information is extracted. Finally, exploiting the complementarity between features, a one-to-one feature fusion module resolves the information redundancy that arises during feature fusion, so that the fused features can accurately locate salient objects with clear details. Experimental results on six datasets show that the proposed algorithm outperforms other state-of-the-art algorithms.
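The record does not specify how the one-to-one feature fusion module combines RGB and depth responses; a minimal, purely illustrative sketch of pairwise fusion, assuming element-wise interaction between aligned feature values (the function name and the product-plus-sum rule are assumptions, not the paper's method), might look like:

```python
def fuse_one_to_one(rgb_feat, depth_feat):
    """Fuse paired RGB and depth feature values element-wise.

    Illustrative sketch only: the paper's actual fusion module is not
    described in this record. Here the product term emphasizes locations
    where both modalities respond, while the sum preserves responses
    that appear in only one modality.
    """
    return [r + d + r * d for r, d in zip(rgb_feat, depth_feat)]


# Example: a strong response in both channels is amplified (1 + 2 + 2 = 5),
# while a depth-only response is simply carried through (0 + 3 + 0 = 3).
fused = fuse_one_to_one([1.0, 0.0], [2.0, 3.0])
```

Such element-wise schemes are common in RGB-D fusion because they avoid the channel growth (and redundancy) of plain concatenation, which matches the abstract's stated goal of reducing information redundancy during fusion.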
Pages: 14