RGB-D salient object detection with asymmetric cross-modal fusion

Cited by: 0
Authors
Yu M. [1 ,2 ]
Xing Z.-H. [1 ]
Liu Y. [2 ]
Affiliations
[1] School of Electronic and Information Engineering, Hebei University of Technology, Tianjin
[2] School of Artificial Intelligence, Hebei University of Technology, Tianjin
Source
Kongzhi yu Juece/Control and Decision | 2023, Vol. 38, No. 09
Keywords
asymmetric fusion; depth denoising module; global perception module; RGB-D image; salient object detection;
DOI
10.13195/j.kzyjc.2021.2084
CLC number
TP3 [Computing technology; computer technology]
Subject classification code
0812
Abstract
Most RGB-D salient object detection methods adopt a symmetric structure in the fusion process, applying the same operations to RGB features and depth features. This fusion strategy ignores the differences between the RGB image and the depth image and is likely to produce false detections. To address this problem, this paper proposes a cross-modal fusion RGB-D salient object detection method based on an asymmetric structure. A global perception module (GPM) is designed to extract the global features of RGB images, and a depth denoising module (DDM) is designed to filter out the heavy noise in low-quality depth images. A purpose-built asymmetric fusion module then fully exploits the differences between the two modalities: the depth features locate salient objects and thereby guide the fusion of RGB features, which in turn supplement the detailed information of the salient objects, so that the respective strengths of the two modalities complement each other. Extensive experiments on four publicly available RGB-D salient object detection datasets verify that the proposed method outperforms state-of-the-art methods. © 2023 Northeast University. All rights reserved.
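The asymmetric guidance described in the abstract (depth localizes salient objects, RGB contributes detail) can be sketched in a minimal NumPy form. This is an illustrative interpretation only, not the authors' actual module: the function name, feature shapes, and the choice of a depth-derived sigmoid gate with a residual connection are all assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def asymmetric_fusion(rgb_feat, depth_feat):
    """Hypothetical asymmetric cross-modal fusion.

    rgb_feat, depth_feat: feature maps of shape (C, H, W).
    Depth guides RGB (not the reverse), which is what makes the
    fusion asymmetric.
    """
    # Collapse the depth channels into a single-channel localization
    # map in (0, 1) that marks where the salient object likely is.
    loc_map = sigmoid(depth_feat.mean(axis=0, keepdims=True))  # (1, H, W)
    # Gate the RGB features with the depth-derived map, and keep a
    # residual RGB path so fine detail is not suppressed.
    return rgb_feat * loc_map + rgb_feat

# Toy example with random features standing in for backbone outputs.
rgb = np.random.rand(4, 8, 8)
depth = np.random.rand(4, 8, 8)
fused = asymmetric_fusion(rgb, depth)
```

Because the gate lies in (0, 1) and is added on top of the residual path, the fused response is never weaker than the RGB features alone and is amplified most where the depth map is confident.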
Pages: 2487-2495
Page count: 8