Biologically inspired multimodal integration: Interferences in a human-robot interaction game

Cited by: 7
Authors
Sauser, Eric L. [1 ]
Billard, Aude G. [1 ]
Affiliation
[1] Ecole Polytech Fed Lausanne, LASA, Learning Algorithms & Syst Lab, CH-1015 Lausanne, Switzerland
Source
2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-12 | 2006
Funding
Swiss National Science Foundation
DOI
10.1109/IROS.2006.282283
Chinese Library Classification
TP [Automation technology; computer technology]
Discipline Classification Code
0812
Abstract
This paper presents a biologically inspired approach to multimodal integration and decision-making in the context of human-robot interactions. More specifically, we address the principle of ideomotor compatibility by which observing the movements of others influences the quality of one's own performance. This fundamental human ability is likely to be linked with human imitation abilities, social interactions, the transfer of manual skills, and probably to mind reading. We present a robotic control model capable of integrating multimodal information, decision making, and replicating a stimulus-response compatibility task, originally designed to measure the effect of ideomotor compatibility on human behavior. The model consists of a neural network based on the dynamic field approach, which is known for its natural ability for stimulus enhancement as well as cooperative and competitive interactions within and across sensorimotor representations. Finally, we discuss how the capacity for ideomotor facilitation can provide the robot with human-like behavior, but at the expense of several disadvantages, such as hesitation and even mistakes.
Pages: 5619 / +
Page count: 2
Related Papers (50 total)
  • [11] Affective Human-Robot Interaction with Multimodal Explanations
    Zhu, Hongbo
    Yu, Chuang
    Cangelosi, Angelo
    SOCIAL ROBOTICS, ICSR 2022, PT I, 2022, 13817 : 241 - 252
  • [12] Enabling multimodal human-robot interaction for the Karlsruhe humanoid robot
    Stiefelhagen, Rainer
    Ekenel, Hazim Kemal
    Fugen, Christian
    Gieselmann, Petra
    Holzapfel, Hartwig
    Kraft, Florian
    Nickel, Kai
    Voit, Michael
    Waibel, Alex
    IEEE TRANSACTIONS ON ROBOTICS, 2007, 23 (05) : 840 - 851
  • [13] Designing a Multimodal Human-Robot Interaction Interface for an Industrial Robot
    Mocan, Bogdan
    Fulea, Mircea
    Brad, Stelian
    ADVANCES IN ROBOT DESIGN AND INTELLIGENT CONTROL, 2016, 371 : 255 - 263
  • [14] Multimodal fusion and human-robot interaction control of an intelligent robot
    Gong, Tao
    Chen, Dan
    Wang, Guangping
    Zhang, Weicai
    Zhang, Junqi
    Ouyang, Zhongchuan
    Zhang, Fan
    Sun, Ruifeng
    Ji, Jiancheng Charles
    Chen, Wei
    FRONTIERS IN BIOENGINEERING AND BIOTECHNOLOGY, 2024, 11
  • [15] Innovative Human-Robot Interaction for a Robot Tutor in Biology Game
    Saleh, AbdelRahman Ahmed
    Abdelbaki, Nashwa
    2017 18TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS (ICAR), 2017, : 614 - 619
  • [16] Integration of Gestures and Speech in Human-Robot Interaction
    Meena, Raveesh
    Jokinen, Kristiina
    Wilcock, Graham
    3RD IEEE INTERNATIONAL CONFERENCE ON COGNITIVE INFOCOMMUNICATIONS (COGINFOCOM 2012), 2012, : 673 - 678
  • [17] MULTIMODAL HUMAN ACTION RECOGNITION IN ASSISTIVE HUMAN-ROBOT INTERACTION
    Rodomagoulakis, I.
    Kardaris, N.
    Pitsikalis, V.
    Mavroudi, E.
    Katsamanis, A.
    Tsiami, A.
    Maragos, P.
    2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS, 2016, : 2702 - 2706
  • [18] Knowledge acquisition through human-robot multimodal interaction
    Randelli, Gabriele
    Bonanni, Taigo Maria
    Iocchi, Luca
    Nardi, Daniele
    INTELLIGENT SERVICE ROBOTICS, 2013, 6 (01) : 19 - 31
  • [19] Multimodal Engagement Prediction in Multiperson Human-Robot Interaction
    Abdelrahman, Ahmed A.
    Strazdas, Dominykas
    Khalifa, Aly
    Hintz, Jan
    Hempel, Thorsten
    Al-Hamadi, Ayoub
    IEEE ACCESS, 2022, 10 : 61980 - 61991
  • [20] Challenges of Multimodal Interaction in the Era of Human-Robot Coexistence
    Zhang, Zhengyou
    ICMI'19: PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2019, : 2 - 2