Multiview diffusion-based affinity graph learning with good neighbourhoods for salient object detection

Citations: 0
Authors
Wang, Fan [1 ]
Wang, Mingxian [2 ]
Peng, Guohua [3 ]
Affiliations
[1] Xian Shiyou Univ, Sch Sci, Xian 710065, Peoples R China
[2] Xian Shiyou Univ, Sch Earth Sci & Engn, Xian 710065, Peoples R China
[3] Northwestern Polytech Univ, Sch Math & Stat, Xian 710129, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Salient object detection; Affinity graph learning; Neighbourhoods; Multiview handcrafted features; Graph model; ATTENTION;
DOI
10.1007/s10489-024-05847-7
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Salient object detection is a challenging task in computer vision and has been used to extract valuable information from many real-world scenarios. Graph-based detection approaches have attracted extensive attention because of their high efficiency and stability. Nevertheless, most existing approaches fail to exploit multiview features when constructing graph models, resulting in poor performance in extreme scenes. In graph-based models, the graph structure and the neighbourhoods play essential roles in salient object detection performance. In this paper, we propose a novel saliency detection approach via multiview diffusion-based affinity learning with good neighbourhoods. The proposed model comprises three components: 1) multiview diffusion-based affinity learning to produce a local/global affinity matrix, 2) subspace clustering to choose good neighbourhoods, and 3) an unsupervised graph-based diffusion model to guide saliency detection. The uniqueness of our affinity graph model lies in exploring multiview handcrafted features to identify different underlying salient objects in extreme scenes. Extensive experiments on several standard databases validate the superior performance of the proposed model over state-of-the-art methods. The experimental results demonstrate that our graph model with multiview handcrafted features is competitive with leading graph models that use multiview deep features.
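The graph-based diffusion the abstract refers to can be illustrated with a classic manifold-ranking-style sketch: build an affinity matrix over image regions, normalize it, and propagate seed saliency through the graph in closed form. This is a minimal toy example of generic graph diffusion, not the authors' multiview affinity-learning model; the Gaussian affinity, `sigma`, and `alpha` values here are illustrative assumptions.

```python
import numpy as np

def affinity(features, sigma=0.5):
    # Gaussian affinity between node feature vectors (a common but
    # hypothetical choice; the paper instead learns this matrix from
    # multiple handcrafted feature views).
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def diffuse(W, seeds, alpha=0.99):
    # Manifold-ranking-style diffusion: s = (I - alpha * S)^{-1} y,
    # where S is the symmetrically normalized affinity matrix.
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    n = W.shape[0]
    s = np.linalg.solve(np.eye(n) - alpha * S, seeds)
    # rescale scores to [0, 1] for use as a saliency map
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

# Toy example: 6 region nodes forming two feature clusters;
# seeding one node of the first cluster should spread saliency
# to its cluster but not across the gap.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                  [1.0, 1.0], [1.1, 1.0], [1.0, 1.1]])
y = np.zeros(6)
y[0] = 1.0
s = diffuse(affinity(feats), y)
```

In practice the nodes would be superpixels and the seeds would come from a prior (e.g. background or compactness cues); the diffusion step itself is unchanged.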
Pages: 20
Related papers
50 records in total
  • [1] Intensifying graph diffusion-based salient object detection with sparse graph weighting
    Wang, Fan
    Peng, Guohua
    Multimedia Tools and Applications, 2023, 82 (22): 34113 - 34127
  • [2] A salient object segmentation framework using diffusion-based affinity learning
    Moradi, Morteza
    Bayat, Farhad
    Expert Systems with Applications, 2021, 168
  • [3] Learning optimal seeds for diffusion-based salient object detection
    Lu, Song
    Mahadevan, Vijay
    Vasconcelos, Nuno
    2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014: 2790 - 2797
  • [4] Generic Promotion of Diffusion-Based Salient Object Detection
    Jiang, Peng
    Vasconcelos, Nuno
    Peng, Jingliang
    2015 IEEE International Conference on Computer Vision (ICCV), 2015: 217 - 225
  • [5] SOD-diffusion: Salient Object Detection via Diffusion-Based Image Generators
    Zhang, Shuo
    Huang, Jiaming
    Chen, Shizhe
    Wu, Yan
    Hu, Tao
    Liu, Jing
    Computer Graphics Forum, 2024, 43 (07)
  • [6] Salient object detection via cross diffusion-based compactness on multiple graphs
    Wang, Fan
    Peng, Guohua
    Multimedia Tools and Applications, 2021, 80 (10): 15959 - 15976
  • [7] Salient Object Detection via Multi-feature Diffusion-based Method
    Ye, Feng
    Hong, Siting
    Chen, Jiazhen
    Zheng, Zihua
    Liu, Guanghai
    Journal of Electronics & Information Technology, 2018, 40 (05): 1210 - 1218
  • [8] Cauchy graph embedding based diffusion model for salient object detection
    Tan, Yihua
    Li, Yansheng
    Chen, Chen
    Yu, Jin-Gang
    Tian, Jinwen
    Journal of the Optical Society of America A-Optics Image Science and Vision, 2016, 33 (05): 887 - 898