Using Embodied Multimodal Fusion to Perform Supportive and Instructive Robot Roles in Human-Robot Interaction

Cited by: 13
Authors
Giuliani, Manuel [1 ]
Knoll, Alois [2 ]
Affiliations
[1] Fortiss GmbH, D-80805 Munich, Germany
[2] Tech Univ Munich, D-85748 Garching, Germany
Keywords
Human-robot interaction; Embodied multimodal fusion; Robot roles; Role assignment
DOI
10.1007/s12369-013-0194-y
Chinese Library Classification
TP24 [Robotics]
Discipline code
080202; 1405
Abstract
We present a robot that works with humans on a common construction task. In this kind of interaction, it is important that the robot can perform different roles in order to realise an efficient collaboration. To this end, we introduce embodied multimodal fusion, a new approach for processing data from the robot's input modalities. Using this method, we implemented two different robot roles: in the instructive role, the robot mainly instructs the user how to proceed with the construction; in the supportive role, the robot hands over assembly pieces to the human that fit the current progress of the assembly plan. We present a user evaluation that investigates how humans react to the different roles of the robot. The main findings of this evaluation are that the users do not prefer one of the two roles; instead, they take the counterpart to the robot's role and adjust their own behaviour according to the robot's actions. The most influential factors for user satisfaction in this kind of interaction are the number of times the users picked up a building piece without an explicit instruction from the robot, which had a positive influence, and the number of utterances the users made themselves, which had a negative influence.
Pages: 345 - 356
Page count: 12
Related Papers
50 records in total
  • [1] Using Embodied Multimodal Fusion to Perform Supportive and Instructive Robot Roles in Human-Robot Interaction
    Giuliani, Manuel
    Knoll, Alois
    [J]. INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2013, 5 : 345 - 356
  • [2] Multimodal fusion and human-robot interaction control of an intelligent robot
    Gong, Tao
    Chen, Dan
    Wang, Guangping
    Zhang, Weicai
    Zhang, Junqi
    Ouyang, Zhongchuan
    Zhang, Fan
    Sun, Ruifeng
    Ji, Jiancheng Charles
    Chen, Wei
    [J]. FRONTIERS IN BIOENGINEERING AND BIOTECHNOLOGY, 2024, 11
  • [3] Multimodal Information Fusion for Human-Robot Interaction
    Luo, Ren C.
    Wu, Y. C.
    Lin, P. H.
    [J]. 2015 IEEE 10TH JUBILEE INTERNATIONAL SYMPOSIUM ON APPLIED COMPUTATIONAL INTELLIGENCE AND INFORMATICS (SACI), 2015, : 535 - 540
  • [4] Multimodal Fusion as Communicative Acts during Human-Robot Interaction
    Alonso-Martin, Fernando
    Gorostiza, Javier F.
    Malfaz, Maria
    Salichs, Miguel A.
    [J]. CYBERNETICS AND SYSTEMS, 2013, 44 (08) : 681 - 703
  • [5] Multimodal Interaction for Human-Robot Teams
    Burke, Dustin
    Schurr, Nathan
    Ayers, Jeanine
    Rousseau, Jeff
    Fertitta, John
    Carlin, Alan
    Dumond, Danielle
    [J]. UNMANNED SYSTEMS TECHNOLOGY XV, 2013, 8741
  • [6] Designing a Multimodal Human-Robot Interaction Interface for an Industrial Robot
    Mocan, Bogdan
    Fulea, Mircea
    Brad, Stelian
    [J]. ADVANCES IN ROBOT DESIGN AND INTELLIGENT CONTROL, 2016, 371 : 255 - 263
  • [7] Enabling multimodal human-robot interaction for the Karlsruhe humanoid robot
    Stiefelhagen, Rainer
    Ekenel, Hazim Kemal
    Fugen, Christian
    Gieselmann, Petra
    Holzapfel, Hartwig
    Kraft, Florian
    Nickel, Kai
    Voit, Michael
    Waibel, Alex
    [J]. IEEE TRANSACTIONS ON ROBOTICS, 2007, 23 (05) : 840 - 851
  • [8] Object affordance based multimodal fusion for natural Human-Robot interaction
    Mi, Jinpeng
    Tang, Song
    Deng, Zhen
    Goerner, Michael
    Zhang, Jianwei
    [J]. COGNITIVE SYSTEMS RESEARCH, 2019, 54 : 128 - 137
  • [9] Recent advancements in multimodal human-robot interaction
    Su, Hang
    Qi, Wen
    Chen, Jiahao
    Yang, Chenguang
    Sandoval, Juan
    Laribi, Med Amine
    [J]. FRONTIERS IN NEUROROBOTICS, 2023, 17
  • [10] A Dialogue System for Multimodal Human-Robot Interaction
    Lucignano, Lorenzo
    Cutugno, Francesco
    Rossi, Silvia
    Finzi, Alberto
    [J]. ICMI'13: PROCEEDINGS OF THE 2013 ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2013, : 197 - 204