A Time-Dependent Saliency Model Combining Center and Depth Biases for 2D and 3D Viewing Conditions

Cited by: 17
Authors
Gautier, J. [1 ]
Le Meur, O. [1 ]
Affiliations
[1] Univ Rennes 1, F-35042 Rennes, France
Keywords
Eye movements; Saliency model; Binocular disparity; Stereoscopy; Visual attention; Scene; Features
DOI
10.1007/s12559-012-9138-3
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
The role of binocular disparity in the deployment of visual attention is examined in this paper. To address this point, we compared eye tracking data recorded while observers viewed natural images in 2D and 3D conditions. The influence of disparity on saliency, center and depth biases is first studied. Results show that visual exploration is affected by the introduction of binocular disparity. In particular, participants tend to look first at closer areas in the 3D condition and then direct their gaze to more widespread locations. Besides this behavioral analysis, we assess the extent to which state-of-the-art models of bottom-up visual attention predict where observers looked in both viewing conditions. To improve their ability to predict salient regions, low-level features as well as higher-level foreground/background cues are examined. Results indicate that, following the initial centering response, the foreground feature plays an active role in both the early and middle instants of attention deployment. Importantly, this influence is more pronounced in stereoscopic conditions. This supports the notion of quasi-instantaneous bottom-up saliency modulated by higher-level figure/ground processing. Beyond depth information itself, the foreground cue might constitute an early process of "selection for action". Finally, we propose a time-dependent computational model to predict saliency on still pictures. The proposed approach combines low-level visual features, center and depth biases, and it outperforms state-of-the-art models of bottom-up attention.
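As a rough, hypothetical sketch of the approach the abstract describes (not the authors' published implementation), the Python fragment below blends a low-level saliency map with center and foreground (depth) biases using time-dependent weights; the weight schedule, the Gaussian center model, and all names are assumptions made for illustration only.

    # Illustrative sketch: time-dependent combination of a low-level saliency
    # map with center and foreground (depth) biases. The weight schedule and
    # Gaussian center bias are assumptions, not the paper's parameters.
    import numpy as np

    def center_bias(h, w, sigma=0.25):
        """Isotropic Gaussian centered on the image (sigma as a fraction of size)."""
        ys, xs = np.mgrid[0:h, 0:w]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        d2 = ((ys - cy) / (sigma * h)) ** 2 + ((xs - cx) / (sigma * w)) ** 2
        return np.exp(-0.5 * d2)

    def combine_saliency(low_level, foreground, t):
        """Blend low-level saliency, center bias and foreground bias at time t (s).

        Early fixations are dominated by the center bias; the foreground cue and
        low-level saliency then take over -- a hypothetical schedule mimicking the
        time course reported in the paper.
        """
        h, w = low_level.shape
        center = center_bias(h, w)
        w_center = np.exp(-t / 0.5)        # centering response decays quickly
        w_fg = (1.0 - w_center) * 0.5      # foreground cue in early/middle instants
        w_low = 1.0 - w_center - w_fg      # remaining weight on low-level saliency
        s = w_low * low_level + w_center * center + w_fg * foreground
        return s / (s.max() + 1e-12)       # normalize to [0, 1]

    # Example with random stand-ins for the low-level and foreground maps.
    rng = np.random.default_rng(0)
    low = rng.random((60, 80))
    fg = rng.random((60, 80))
    saliency_at_1s = combine_saliency(low, fg, t=1.0)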
Pages: 141-156
Page count: 16
Related papers
50 items in total
  • [21] Face recognition method combining 3D face model with 2D recognition
    Zhao, Minghua
    You, Zhisheng
    Zhao, Yonggang
    Liu, Zhifang
    PROCEEDINGS OF THE FOURTH INTERNATIONAL CONFERENCE ON IMAGE AND GRAPHICS, 2007, : 655+
  • [22] A full 3D time-dependent electromagnetic model for Roebel cables
    Zermeno, Victor M. R.
    Grilli, Francesco
    Sirois, Frederic
    SUPERCONDUCTOR SCIENCE & TECHNOLOGY, 2013, 26 (05):
  • [23] DEPTH GENERATION METHOD FOR 2D TO 3D CONVERSION
    Yu, Fengli
    Liu, Ju
    Ren, Yannan
    Sun, Jiande
    Gao, Yuling
    Liu, Wei
    2011 3DTV CONFERENCE: THE TRUE VISION - CAPTURE, TRANSMISSION AND DISPLAY OF 3D VIDEO (3DTV-CON), 2011,
  • [24] A new descriptor for 2D depth image indexing and 3D model retrieval
    Chaouch, Mohamed
    Verroust-Blondet, Anne
    2007 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-7, 2007, : 3169 - 3172
  • [25] Pressure correction projection finite element method for the 2D/3D time-dependent thermomicropolar fluid problem
    Ren, Yuhang
    Liu, Demin
    COMPUTERS & MATHEMATICS WITH APPLICATIONS, 2023, 136 : 136 - 150
  • [26] Time-dependent backgrounds of 2D string theory
    Alexandrov, SY
    Kazakov, VA
    Kostov, IK
    NUCLEAR PHYSICS B, 2002, 640 (1-2) : 119 - 144
  • [27] MagicGS: Combining 2D and 3D Priors for Effective 3D Content Generation
    Wang, Jiayi
    Li, Zhenqiang
    Cao, Yangjie
    Li, Jie
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT VI, 2025, 15036 : 357 - 370
  • [28] Time-dependent 2D spacetimes from matrices
    Das, SR
    MODERN PHYSICS LETTERS A, 2005, 20 (28) : 2101 - 2118
  • [29] The 2D time-dependent similarity transformation model as a tool for deformation monitoring
    Ampatzidis, Dimitrios
    Gruber, Christian
    Kampouris, Vasileios
    ACTA GEODAETICA ET GEOPHYSICA, 2018, 53 (01) : 81 - 92