People Do Not Automatically Take the Level-1 Visual Perspective of Humanoid Robot Avatars

Citations: 0
Authors
Chengli Xiao
Ya Fan
Jingyu Zhang
Renlai Zhou
Affiliations
[1] Nanjing University, Department of Psychology, School of Social and Behavioral Sciences
Keywords
Visual perspective-taking; Human–robot interaction; Anthropomorphism; Individual differences
DOI: not available
Abstract
Taking the perspective of others is critical for both human–human and human–robot interaction. Previous studies using the dot perspective task have shown that people automatically process what other people can see. In the present study, using the classical dot perspective task, we found that Chinese participants did not automatically process humanoid robot avatars' perspectives, whether they judged only from their own perspective (Experiment 1) or judged randomly between their own and the avatar's perspective (Experiment 2). Participants' anthropomorphism tendency was related to the efficiency, but not the automaticity, of perspective-taking. These results indicate that human–human and human–robot interaction may differ in basic visual processing, and suggest that people's anthropomorphism tendency is an influential factor in human–robot interaction.
Pages: 165–176 (11 pages)