Gaze in Interspecies Human-Pet Interaction: Some Exploratory Analyses

Cited by: 5
Authors
Mondeme, Chloe [1 ,2 ,3 ]
Affiliations
[1] CNRS, Lab Triangle, Paris, France
[2] CEFRES, Prague, Czech Republic
[3] CNRS, Ecole Normale Superieure de Lyon, 15 Parvis Rene Descartes, F-69007 Lyon, France
Keywords
Gaze; interspecies sociality; animals; sequence organization; participation; TURN-TAKING; ORGANIZATION; RESPONSES; DIRECTION; SPEAKING; ANIMALS; SPEECH; BODY; CUES; DOG
DOI
10.1080/08351813.2023.2272527
CLC Classification Number
G2 [Information and Knowledge Dissemination]
Subject Classification Number
05; 0503
Abstract
This article examines video-recorded, naturally occurring human-pet interactions in which the animal's gaze is treated by the human as a turn-allocation device. Gaze exchange has been extensively studied as the social phenomenon par excellence, especially by scientific paradigms that define sociality as mutual orientation. Recently, studies in ethology have shown the relevance of sequential analyses of gaze exchanges in animal interactions, providing important information on the "monitoring function" of gaze. Less is known about the "regulatory function" (i.e., the effect of animal gaze on the sequential organization of action) in human-animal interspecies interactions. This article aims to fill this gap by investigating the role of domestic dogs', cats', and horses' gazes in human conversation and courses of action. The findings demonstrate a systematic format: an animal-initiated gaze followed by a verbal turn from the human participant, which evidences the turn-allocational function of the animal's gaze for human participation. Data are in French.
Pages: 291-310
Number of pages: 20