Key Elements for Human-Robot Joint Action

Cited: 8
Authors
Clodic, Aurelie [1 ]
Alami, Rachid [1 ]
Chatila, Raja [2 ,3 ]
Affiliations
[1] LAAS, CNRS, 7 Ave Colonel Roche, F-31400 Toulouse, France
[2] Sorbonne Univ, Paris, France
[3] CNRS, Paris, France
Keywords
action; joint action; architecture for social robotics; human-robot interaction;
DOI
10.3233/978-1-61499-480-0-23
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
For more than a decade, the field of human-robot interaction has generated many valuable contributions of interest to the robotics community at large. The field is vast, spanning perception, action, and decision. At the same time, human-human joint action has become a topic of intense research in cognitive psychology and philosophy, yielding findings and even architectural hints that help our understanding of human-human joint action. In this paper, we analyse some of these findings and connect them to the human-robot joint action case. For us, this work is a first step toward the definition of a framework dedicated to human-robot interaction.
Pages: 23-33
Page count: 11