A Time-Dependent Saliency Model Combining Center and Depth Biases for 2D and 3D Viewing Conditions

Cited by: 17
Authors
Gautier, J. [1 ]
Le Meur, O. [1 ]
Affiliations
[1] Univ Rennes 1, F-35042 Rennes, France
Keywords
Eye movements; Saliency model; Binocular disparity; Stereoscopy; Visual attention; Scene; Features
DOI
10.1007/s12559-012-9138-3
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The role of binocular disparity in the deployment of visual attention is examined in this paper. To address this point, we compared eye-tracking data recorded while observers viewed natural images in 2D and 3D conditions. The influence of disparity on saliency, center, and depth biases is studied first. Results show that visual exploration is affected by the introduction of binocular disparity. In particular, participants tend to look first at closer areas in the 3D condition and then direct their gaze to more widespread locations. Besides this behavioral analysis, we assess the extent to which state-of-the-art models of bottom-up visual attention predict where observers looked in both viewing conditions. To improve their ability to predict salient regions, low-level features as well as higher-level foreground/background cues are examined. Results indicate that, following the initial centering response, the foreground feature plays an active role in both the early and middle instants of attention deployment. Importantly, this influence is more pronounced in stereoscopic conditions. This supports the notion of a quasi-instantaneous bottom-up saliency modulated by higher-level figure/ground processing. Beyond depth information itself, the foreground cue might constitute an early process of "selection for action". Finally, we propose a time-dependent computational model to predict saliency on still pictures. The proposed approach combines low-level visual features with center and depth biases, and it outperforms state-of-the-art models of bottom-up attention.
Pages: 141-156
Page count: 16
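
To make the abstract's combination concrete, the following minimal Python sketch mixes a low-level saliency map with a Gaussian center bias and a foreground (inverse-depth) cue under time-varying weights. This is an illustrative reconstruction only: the weighting scheme, time constants (tau_center, tau_fg), and function names are assumptions, not the authors' published formulation.

import numpy as np


def gaussian_center_bias(h, w, sigma_frac=0.25):
    # Isotropic Gaussian centered on the frame: a standard model of the
    # viewing-position center bias.
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sigma = sigma_frac * min(h, w)
    g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
    return g / g.max()


def foreground_bias(depth):
    # Normalize a depth map (larger = farther) into a [0, 1] "closeness"
    # map: a crude stand-in for the paper's foreground/background cue.
    d = depth.astype(float)
    d = (d - d.min()) / (d.max() - d.min() + 1e-9)
    return 1.0 - d


def time_dependent_saliency(low_level, depth, t, tau_center=0.5, tau_fg=2.0):
    # Hypothetical time-weighted mixture: the center bias dominates at
    # stimulus onset and decays quickly; the foreground cue carries the
    # early-to-middle instants; low-level saliency takes the remainder.
    w_center = np.exp(-t / tau_center)
    w_fg = (1.0 - w_center) * np.exp(-t / tau_fg)
    w_low = 1.0 - w_center - w_fg
    s = (w_low * low_level
         + w_center * gaussian_center_bias(*low_level.shape)
         + w_fg * foreground_bias(depth))
    return s / (s.max() + 1e-9)


# Toy usage with random maps standing in for real feature and depth maps.
h, w = 120, 160
rng = np.random.default_rng(0)
low = rng.random((h, w))
depth = rng.random((h, w))
s_early = time_dependent_saliency(low, depth, t=0.2)  # center bias dominates
s_late = time_dependent_saliency(low, depth, t=3.0)   # low-level features dominate

The exponential decay of the center weight encodes the initial centering response described in the abstract, while the transient foreground weight reflects the early-to-middle influence attributed to the figure/ground cue.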