Saliency Hierarchy Modeling via Generative Kernels for Salient Object Detection

Cited by: 8
Authors
Zhang, Wenhu [1 ]
Zheng, Liangli [2 ]
Wang, Huanyu [3 ]
Wu, Xintian [3 ]
Li, Xi [3 ,4 ,5 ]
Affiliations
[1] Zhejiang Univ, Polytech Inst, Hangzhou, Peoples R China
[2] Zhejiang Univ, Sch Software Technol, Hangzhou, Peoples R China
[3] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou, Peoples R China
[4] Zhejiang Univ, Shanghai Inst Adv Study, Hangzhou, Peoples R China
[5] Shanghai AI Lab, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Salient object detection; Saliency hierarchy modeling; Region-level; Sample-level; Generative kernel; Network
DOI
10.1007/978-3-031-19815-1_33
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Salient Object Detection (SOD) is a challenging problem that aims to precisely recognize and segment salient objects. In ground-truth maps, all pixels belonging to the salient objects are positively annotated with the same value. However, the saliency level should be a relative quantity, which varies among different regions within a sample and across different samples. The conflict between these varying saliency levels and the single saliency value in the ground truth results in learning difficulty. To alleviate this problem, we propose a Saliency Hierarchy Network (SHNet) that models saliency patterns via generative kernels from two perspectives: region-level and sample-level. Specifically, we construct a Saliency Hierarchy Module to explicitly model the saliency levels of different regions in a given sample under the guidance of prior knowledge. Moreover, considering the sample-level divergence, we introduce a Hyper Kernel Generator to capture global contexts and adaptively generate convolution kernels for various inputs. Extensive experiments on five standard benchmarks demonstrate that SHNet outperforms other state-of-the-art methods in terms of both performance and efficiency.
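The sample-level idea in the abstract (a Hyper Kernel Generator that captures global context and emits input-conditioned convolution kernels) can be sketched in miniature. The sketch below is an illustrative assumption, not the authors' implementation: it pools a feature map to a single global-context scalar, maps that scalar through a tiny linear layer to a 3x3 kernel, and applies the kernel, so that different inputs receive different kernels. All function names, shapes, and the single-channel simplification are hypothetical.

```python
# Hedged sketch of sample-adaptive kernel generation, loosely inspired by the
# Hyper Kernel Generator described in the abstract. Names and shapes are
# illustrative assumptions, not the paper's architecture.
import random

def global_average_pool(feature_map):
    """Collapse an HxW single-channel feature map to one global-context scalar."""
    h, w = len(feature_map), len(feature_map[0])
    return sum(sum(row) for row in feature_map) / (h * w)

def generate_kernel(context, params, k=3):
    """Map the context scalar through a tiny linear layer to a kxk kernel.

    `params` holds one (weight, bias) pair per kernel tap.
    """
    flat = [context * wgt + b for wgt, b in params]
    return [flat[i * k:(i + 1) * k] for i in range(k)]

def conv2d_valid(feature_map, kernel):
    """Plain 'valid'-mode 2-D cross-correlation with the generated kernel."""
    k = len(kernel)
    h, w = len(feature_map), len(feature_map[0])
    out = []
    for i in range(h - k + 1):
        row = []
        for j in range(w - k + 1):
            acc = 0.0
            for di in range(k):
                for dj in range(k):
                    acc += feature_map[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# Usage: two inputs with different global statistics get different kernels,
# which is the point of sample-level adaptivity.
random.seed(0)
params = [(random.uniform(-0.5, 0.5), random.uniform(-0.1, 0.1)) for _ in range(9)]

fmap_a = [[1.0] * 5 for _ in range(5)]                          # flat input
fmap_b = [[float(i + j) for j in range(5)] for i in range(5)]   # gradient input

kernel_a = generate_kernel(global_average_pool(fmap_a), params)
kernel_b = generate_kernel(global_average_pool(fmap_b), params)
out_a = conv2d_valid(fmap_a, kernel_a)

print(kernel_a != kernel_b)          # kernels adapt to input statistics
print(len(out_a), len(out_a[0]))     # 3x3 'valid' output from a 5x5 input
```

In a real network the linear layer would be learned end-to-end and the kernels would be per-channel tensors; the sketch only shows the control flow of generating weights from global context rather than using fixed weights.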
Pages: 570-587 (18 pages)
Related Papers (50 total)
  • [21] Fusing Generic Objectness and Visual Saliency for Salient Object Detection
    Chang, Kai-Yueh
    Liu, Tyng-Luh
    Chen, Hwann-Tzong
    Lai, Shang-Hong
    2011 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2011, : 914 - 921
  • [22] Bayesian salient object detection based on saliency driven clustering
    Zhou, Lei
    Fu, Keren
    Li, Yijun
    Qiao, Yu
    He, XiangJian
    Yang, Jie
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2014, 29 (03) : 434 - 447
  • [23] Saliency bagging: a novel framework for robust salient object detection
    Singh, Vivek Kumar
    Kumar, Nitin
    VISUAL COMPUTER, 2020, 36 (07) : 1423 - 1441
  • [25] Saliency Boosting: a novel framework to refine salient object detection
    Singh, Vivek Kumar
    Kumar, Nitin
    Madhavan, Suresh
    ARTIFICIAL INTELLIGENCE REVIEW, 2020, 53 (05) : 3731 - 3772
  • [26] DHSNet: Deep Hierarchical Saliency Network for Salient Object Detection
    Liu, Nian
    Han, Junwei
    2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, : 678 - 686
  • [28] Usage of Saliency Prior Maps for Detection of Salient Object Features
    Rao, V. Sambasiva
    Mounika, V
    Sai, N. Raghavendra
    Kumar, G. Sai Chaitanya
    PROCEEDINGS OF THE 2021 FIFTH INTERNATIONAL CONFERENCE ON I-SMAC (IOT IN SOCIAL, MOBILE, ANALYTICS AND CLOUD) (I-SMAC 2021), 2021, : 819 - 825
  • [29] Saliency Density and Edge Response Based Salient Object Detection
    Jing, Huiyun
    Han, Qi
    He, Xin
    Niu, Xiamu
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2013, E96D (05) : 1243 - 1246
  • [30] Co-saliency Detection via Sparse Reconstruction and Co-salient Object Discovery
    Li, Bo
    Sun, Zhengxing
    Hu, Jiagao
    Xu, Junfeng
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2017, PT II, 2018, 10736 : 222 - 232