On Gaze Deployment to Audio-Visual Cues of Social Interactions

Cited by: 6
Authors
Boccignone, Giuseppe [1]
Cuculo, Vittorio [1]
D'Amelio, Alessandro [1]
Grossi, Giuliano [1]
Lanzarotti, Raffaella [1]
Affiliations
[1] Univ Milan, PHuSe Lab, Dipartimento Informat, I-20133 Milan, Italy
Source
IEEE ACCESS, 2020, Vol. 8
Keywords
Computational modeling; Task analysis; Visualization; Stochastic processes; Trajectory; Sensors; Animals; Audio-visual attention; gaze models; social interaction; multimodal perception; EYE-MOVEMENTS; VISUAL-ATTENTION; PRIORITY MAPS; MODEL; TIME; PREDICTION; GOAL; EXPLOITATION; MECHANISMS; SALIENCE;
DOI
10.1109/ACCESS.2020.3021211
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Attention supports our urge to forage on social cues. Under certain circumstances, we spend most of our time scrutinising people, particularly their eyes and faces, and spotting persons who are talking. To account for such behaviour, this article develops a computational model for the deployment of gaze within a multimodal landscape, namely a conversational scene. Gaze dynamics are derived in a principled way by reformulating attention deployment as a stochastic foraging problem. Model simulation experiments on a publicly available dataset of eye-tracked subjects are presented. Results show that the simulated scan paths exhibit trends similar to the eye movements of human observers watching and listening to conversational clips in a free-viewing condition.
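To give a concrete feel for what "gaze deployment as a stochastic process over a multimodal priority landscape" can look like in code, the sketch below is purely illustrative and is not the authors' foraging model from the paper. It assumes a hypothetical 2D priority_map whose peaks stand for audio-visually salient regions (e.g. a talking face) and simulates a scan path with a generic Langevin-type update (deterministic drift towards high priority plus Gaussian noise); the function simulate_scanpath and all its parameters are invented for illustration.

    import numpy as np

    def simulate_scanpath(priority_map, n_steps=200, dt=0.1, drift_gain=50.0,
                          noise_scale=5.0, rng=None):
        # Illustrative Langevin-type walk on a 2D priority map: drift towards
        # high-priority regions plus Gaussian noise. This is NOT the paper's
        # foraging model, only a generic sketch of a stochastic gaze simulator.
        rng = np.random.default_rng() if rng is None else rng
        h, w = priority_map.shape
        gy, gx = np.gradient(priority_map)       # spatial gradient = drift field
        pos = np.array([h / 2.0, w / 2.0])       # start at frame centre (typical free viewing)
        path = [pos.copy()]
        for _ in range(n_steps):
            iy = int(np.clip(pos[0], 0, h - 1))
            ix = int(np.clip(pos[1], 0, w - 1))
            drift = drift_gain * np.array([gy[iy, ix], gx[iy, ix]])
            # Euler-Maruyama update of the drift-plus-noise stochastic dynamics
            pos = pos + drift * dt + noise_scale * np.sqrt(dt) * rng.standard_normal(2)
            pos[0] = np.clip(pos[0], 0, h - 1)
            pos[1] = np.clip(pos[1], 0, w - 1)
            path.append(pos.copy())
        return np.array(path)

    # Toy usage: a single Gaussian "speaker" peak acts as the audio-visual priority map.
    yy, xx = np.mgrid[0:120, 0:160]
    toy_map = np.exp(-((yy - 40.0) ** 2 + (xx - 100.0) ** 2) / (2 * 15.0 ** 2))
    print(simulate_scanpath(toy_map, n_steps=100).shape)   # (101, 2) gaze positions (row, col)

In this toy setting the priority map is static; in an actual audio-visual model it would be recomputed per video frame from cues such as detected faces and speaker activity, and the walk would then follow the time-varying landscape.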
Pages: 161630 - 161654
Page count: 25
Related Papers
50 items in total
  • [1] Audio-visual integration of emotional cues in song
    Thompson, William Forde
    Russo, Frank A.
    Quinto, Lena
    [J]. COGNITION & EMOTION, 2008, 22 (08) : 1457 - 1470
  • [2] Audio-visual Cues for Cloud Service Monitoring
    Bermbach, David
    Eberhardt, Jacob
    [J]. CLOSER: PROCEEDINGS OF THE 7TH INTERNATIONAL CONFERENCE ON CLOUD COMPUTING AND SERVICES SCIENCE, 2017, : 439 - 446
  • [3] Keeping in time with social and non-social stimuli: Synchronisation with auditory, visual, and audio-visual cues
    Honisch, Juliane J.
    Mane, Prasannajeet
    Golan, Ofer
    Chakrabarti, Bhismadev
    [J]. SCIENTIFIC REPORTS, 2021, 11 (01)
  • [4] Deep Reinforcement Learning for Audio-Visual Gaze Control
    Lathuiliere, Stephane
    Masse, Benoit
    Mesejo, Pablo
    Horaud, Radu
    [J]. 2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018, : 1555 - 1562
  • [5] Audio-visual interactions in environment assessment
    Preis, Anna
    Kocinski, Jedrzej
    Hafke-Dys, Honorata
    Wrzosek, Malgorzata
    [J]. SCIENCE OF THE TOTAL ENVIRONMENT, 2015, 523 : 191 - 200
  • [6] Human interaction categorization by using audio-visual cues
    Marin-Jimenez, M. J.
    Munoz-Salinas, R.
    Yeguas-Bolivar, E.
    Perez de la Blanca, N.
    [J]. MACHINE VISION AND APPLICATIONS, 2014, 25 (01) : 71 - 84
  • [7] Audio-visual speech perception without speech cues
    Saldana, HM
    Pisoni, DB
    Fellowes, JM
    Remez, RE
    [J]. ICSLP 96 - FOURTH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE PROCESSING, PROCEEDINGS, VOLS 1-4, 1996, : 2187 - 2190
  • [8] Towards Audio-Visual Cues for Cloud Infrastructure Monitoring
    Bermbach, David
    Eberhardt, Jacob
    [J]. PROCEEDINGS 2016 IEEE INTERNATIONAL CONFERENCE ON CLOUD ENGINEERING (IC2E), 2016, : 218 - 219
  • [9] Vehicle Detection and Classification using Audio-Visual cues
    Piyush, P.
    Rajan, Rajeev
    Mary, Leena
    Koshy, Bino I.
    [J]. 2016 3RD INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND INTEGRATED NETWORKS (SPIN), 2016, : 732 - 736