On Gaze Deployment to Audio-Visual Cues of Social Interactions

Cited by: 6
Authors
Boccignone, Giuseppe [1]
Cuculo, Vittorio [1]
D'Amelio, Alessandro [1]
Grossi, Giuliano [1]
Lanzarotti, Raffaella [1]
Affiliation
[1] Univ Milan, PHuSe Lab, Dipartimento Informat, I-20133 Milan, Italy
Source
IEEE ACCESS, 2020, Vol. 8
Keywords
Computational modeling; Task analysis; Visualization; Stochastic processes; Trajectory; Sensors; Animals; Audio-visual attention; gaze models; social interaction; multimodal perception; EYE-MOVEMENTS; VISUAL-ATTENTION; PRIORITY MAPS; MODEL; TIME; PREDICTION; GOAL; EXPLOITATION; MECHANISMS; SALIENCE;
DOI
10.1109/ACCESS.2020.3021211
CLC number
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
Attention supports our urge to forage on social cues. Under certain circumstances, we spend most of our time scrutinising people, notably their eyes and faces, and spotting persons that are talking. To account for such behaviour, this article develops a computational model for the deployment of gaze within a multimodal landscape, namely a conversational scene. Gaze dynamics are derived in a principled way by reformulating attention deployment as a stochastic foraging problem. Model simulation experiments on a publicly available dataset of eye-tracked subjects are presented. Results show that the simulated scan paths exhibit trends similar to the eye movements of human observers watching and listening to conversational clips in a free-viewing condition.
Pages: 161630-161654
Page count: 25
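
The abstract above frames gaze deployment as a stochastic foraging problem over audio-visual cues. The sketch below is a rough, purely illustrative rendering of that idea, not the authors' actual model: it simulates a scan path as a heavy-tailed random walk biased toward peaks of an assumed audio-visual priority map. The function simulate_gaze, the array priority, and parameters such as alpha and temp are hypothetical placeholders, not names taken from the paper.

    import numpy as np

    def simulate_gaze(priority, n_steps=500, alpha=1.5, temp=0.1, seed=0):
        """Toy foraging-style gaze walk on a 2-D priority map (illustrative only).

        priority : 2-D array, an assumed audio-visual priority/saliency map.
        alpha    : tail exponent of the heavy-tailed (Levy-like) step lengths.
        temp     : temperature controlling attraction to high-priority locations.
        """
        rng = np.random.default_rng(seed)
        h, w = priority.shape
        pos = np.array([h / 2, w / 2])          # start at the frame centre
        path = [pos.copy()]
        for _ in range(n_steps):
            # Heavy-tailed step length (Pareto) with a uniform direction: "flights"
            step = (rng.pareto(alpha) + 1.0) * 2.0
            theta = rng.uniform(0, 2 * np.pi)
            candidate = pos + step * np.array([np.sin(theta), np.cos(theta)])
            candidate = np.clip(candidate, [0, 0], [h - 1, w - 1])
            # Accept moves toward higher priority more often (sigmoid acceptance rule)
            gain = priority[tuple(candidate.astype(int))] - priority[tuple(pos.astype(int))]
            if rng.random() < 1.0 / (1.0 + np.exp(-gain / temp)):
                pos = candidate
            path.append(pos.copy())
        return np.array(path)

    # Example: a synthetic map with two "speakers" modelled as Gaussian blobs
    yy, xx = np.mgrid[0:120, 0:160]
    priority = (np.exp(-((yy - 60) ** 2 + (xx - 50) ** 2) / 300.0)
                + np.exp(-((yy - 60) ** 2 + (xx - 110) ** 2) / 300.0))
    scanpath = simulate_gaze(priority)
    print(scanpath.shape)  # (501, 2) sequence of simulated gaze positions

As the abstract notes, the actual model operates on conversational scenes with faces and speaking persons; the synthetic two-blob map here only stands in for such audio-visual cues and conveys the foraging-style sampling of fixations.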