Deep Networks for Saliency Detection via Local Estimation and Global Search

Cited by: 0
Authors
Wang, Lijun [1 ]
Lu, Huchuan [1 ]
Ruan, Xiang [2 ]
Yang, Ming-Hsuan [3 ]
Affiliations
[1] Dalian Univ Technol, Dalian, Liaoning, Peoples R China
[2] OMRON Corp, Kyoto, Japan
[3] Univ Calif Merced, Merced, CA USA
Funding
US National Science Foundation;
Keywords
REGION DETECTION; VISUAL SALIENCY; OBJECTNESS;
DOI
Not available
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents a saliency detection algorithm that integrates local estimation and global search. In the local estimation stage, we detect local saliency using a deep neural network (DNN-L) that learns local patch features to determine the saliency value of each pixel. The estimated local saliency maps are further refined by exploring high-level object concepts. In the global search stage, the local saliency map, together with global contrast and geometric information, is used as a global feature to describe a set of object candidate regions. Another deep neural network (DNN-G) is trained to predict the saliency score of each object region based on these global features. The final saliency map is generated by a weighted sum of salient object regions. Our method offers two interesting insights. First, local features learned in a supervised scheme can effectively capture local contrast, texture, and shape information for saliency detection. Second, the complex relationship between different global saliency cues can be captured by deep networks and exploited in a principled manner rather than heuristically. Quantitative and qualitative experiments on several benchmark data sets demonstrate that our algorithm performs favorably against state-of-the-art methods.
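The abstract's final step, generating the saliency map as a weighted sum of scored candidate regions, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `fuse_region_saliency` and the `top_k` cutoff are assumptions introduced here for clarity.

```python
import numpy as np

def fuse_region_saliency(region_masks, region_scores, top_k=3):
    """Combine binary region masks into one saliency map, weighted by
    the scores a global network (DNN-G in the paper) would predict.

    region_masks  : list of 2-D {0, 1} arrays, one per candidate region
    region_scores : 1-D array of predicted saliency scores, one per region
    top_k         : number of highest-scoring regions to keep (assumption)
    """
    order = np.argsort(region_scores)[::-1][:top_k]   # best regions first
    sal = np.zeros_like(region_masks[0], dtype=float)
    for i in order:
        sal += region_scores[i] * region_masks[i]     # weighted accumulation
    if sal.max() > 0:
        sal /= sal.max()                              # normalize to [0, 1]
    return sal
```

Overlapping regions reinforce each other under this scheme, so pixels covered by several high-scoring candidates end up brighter in the final map.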
Pages: 3183 - 3192 (10 pages)