Depth quality-aware selective saliency fusion for RGB-D image salient object detection

Times Cited: 0
Authors
Wang, Xuehao [1 ]
Li, Shuai [1 ]
Chen, Chenglizhao [2 ]
Hao, Aimin [1 ]
Qin, Hong [3 ]
Affiliations
[1] Beihang Univ, State Key Lab Virtual Real Technol & Syst, Beijing, Peoples R China
[2] Qingdao Univ, Coll Comp Sci & Technol, Qingdao, Peoples R China
[3] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
Funding
US National Science Foundation; National Natural Science Foundation of China
Keywords
Depth quality assessment; Salient object detection; Selective fusion; ADVERSARIAL NETWORK; SEGMENTATION; MODEL;
DOI
10.1016/j.neucom.2020.12.071
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Previous RGB-D salient object detection (SOD) methods have widely adopted deep learning tools to automatically strike a trade-off between RGB and depth (D). The key rationale is to take full advantage of the complementary nature of RGB and D, aiming for much-improved SOD performance over using either modality alone. However, because D quality usually varies from scene to scene, such fully automatic fusion schemes may not always be helpful for the SOD task. Moreover, as an objective factor, D quality has long been overlooked by previous work. Thus, this paper proposes a simple yet effective scheme to measure D quality in advance. The key idea is to devise a series of features in accordance with the common attributes of high-quality D regions. More concretely, we advocate conducting D quality assessment following a multi-scale methodology that comprises low-level edge consistency, mid-level regional uncertainty, and high-level model variance. All these components are computed independently and later combined with RGB and D saliency cues to guide the selective RGB-D fusion. Compared with the SOTA fusion schemes, our method achieves better fusion results between RGB and D. Specifically, the proposed D quality measurement method achieves steady performance improvements of almost 2.0% on average. (c) 2020 Elsevier B.V. All rights reserved.
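To make the selective-fusion idea above concrete, below is a minimal, illustrative Python/NumPy sketch of two ingredients the abstract names: a low-level edge-consistency score for a depth map, and a quality-weighted blend of RGB and depth saliency maps. Every function name, the gradient-based edge proxy, and the linear weighting are assumptions made for illustration only; the paper's actual assessment also uses mid-level regional uncertainty and high-level model variance, and its fusion is learned rather than hand-weighted.

import numpy as np

def edge_consistency(rgb_gray, depth):
    # Low-level cue (illustrative): fraction of depth edges that also
    # appear as RGB edges. A simple gradient-magnitude threshold stands
    # in for a real edge detector here.
    def edge_mask(img):
        gy, gx = np.gradient(img.astype(np.float64))
        mag = np.hypot(gx, gy)
        return mag > (mag.mean() + mag.std())
    rgb_edges = edge_mask(rgb_gray)
    depth_edges = edge_mask(depth)
    return (rgb_edges & depth_edges).sum() / max(depth_edges.sum(), 1)

def selective_fusion(sal_rgb, sal_d, d_quality):
    # Scale the depth saliency stream by its estimated quality in [0, 1],
    # so unreliable depth contributes less to the fused saliency map.
    fused = sal_rgb + d_quality * sal_d
    return fused / (fused.max() + 1e-8)

# Toy usage: random stand-ins for a grayscale image, a depth map,
# and the two per-modality saliency maps.
rng = np.random.default_rng(0)
rgb_gray = rng.random((64, 64))
depth = rng.random((64, 64))
q = edge_consistency(rgb_gray, depth)  # depth-quality estimate
fused = selective_fusion(rng.random((64, 64)), rng.random((64, 64)), q)
print(f"estimated depth quality: {q:.3f}")

The point of the sketch is the gating pattern: a scalar (or per-region) quality estimate computed ahead of time decides how much the depth stream is trusted during fusion, instead of letting the fusion weight both modalities blindly.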
Pages: 44-56
Number of Pages: 13