Visual saliency estimation by nonlinearly integrating features using region covariances

Cited by: 316
Authors
Erdem, Erkut [1]
Erdem, Aykut [1]
Affiliation
[1] Hacettepe Univ, Dept Comp Engn, Ankara, Turkey
Source
JOURNAL OF VISION | 2013, Vol. 13, No. 4
Keywords
visual attention; computational saliency model; feature integration; region covariances; MODEL PREDICTS; GUIDED SEARCH; ATTENTION; CLASSIFICATION; ASYMMETRIES; ALLOCATION; SET
DOI
10.1167/13.4.11
CLC number
R77 [Ophthalmology]
Discipline code
100212
Abstract
To detect visually salient elements of complex natural scenes, computational bottom-up saliency models commonly examine several feature channels such as color and orientation in parallel. They compute a separate feature map for each channel and then linearly combine these maps to produce a master saliency map. However, only a few studies have investigated how different feature dimensions contribute to overall visual saliency. We address this integration issue and propose to use covariance matrices of simple image features (known as region covariance descriptors in the computer vision community; Tuzel, Porikli, & Meer, 2006) as meta-features for saliency estimation. As low-dimensional representations of image patches, region covariances capture local image structures better than standard linear filters, but more importantly, they naturally provide nonlinear integration of different features by modeling their correlations. We also show that first-order statistics of features can easily be incorporated into the proposed approach to improve performance. Our experimental evaluation on several benchmark data sets demonstrates that the proposed approach outperforms state-of-the-art models on various tasks, including prediction of human eye fixations, salient object detection, and image retargeting.
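To make the meta-feature concrete, the following minimal Python sketch computes a region covariance descriptor for a grayscale patch and compares patches with a standard affine-invariant distance between covariance matrices. The per-pixel feature set (coordinates, intensity, gradient magnitudes), the patch size, and the simple center-surround scoring are illustrative assumptions, not the authors' exact pipeline.

import numpy as np

def patch_covariance(patch):
    # Region covariance descriptor of a grayscale patch.
    # Per-pixel features (illustrative, not the paper's exact set):
    # x, y, intensity, |dI/dx|, |dI/dy|.
    patch = np.asarray(patch, dtype=float)
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch)          # gradients along rows (y) and columns (x)
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel()], axis=1)
    # First-order statistics (the feature means) could be kept alongside
    # this covariance, as the abstract notes, to improve performance.
    return np.cov(feats, rowvar=False)   # 5 x 5 covariance = the meta-feature

def covariance_distance(c1, c2, eps=1e-6):
    # Affine-invariant metric between covariance matrices:
    # sqrt of the sum of squared log generalized eigenvalues of (c1, c2).
    d = c1.shape[0]
    c1 = c1 + eps * np.eye(d)
    c2 = c2 + eps * np.eye(d)
    lam = np.linalg.eigvals(np.linalg.solve(c2, c1)).real
    lam = np.clip(lam, eps, None)
    return np.sqrt(np.sum(np.log(lam) ** 2))

# Toy usage: score a center patch by its average covariance distance
# to surrounding patches (a simple center-surround saliency idea).
rng = np.random.default_rng(0)
center = rng.random((16, 16))
surround = [rng.random((16, 16)) for _ in range(8)]
c_center = patch_covariance(center)
score = np.mean([covariance_distance(c_center, patch_covariance(p))
                 for p in surround])
print(f"toy saliency score: {score:.3f}")

In this sketch, a patch whose joint (second-order) feature statistics differ from those of its surroundings receives a larger score, which is the intuition behind using covariances as meta-features: correlations between feature channels are compared directly rather than summing per-channel maps linearly.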
Pages: 20