Biologically inspired multimodal integration: Interferences in a human-robot interaction game

Cited by: 7
Authors
Sauser, Eric L. [1 ]
Billard, Aude G. [1 ]
Affiliations
[1] Ecole Polytech Fed Lausanne, LASA, Learning Algorithms & Syst Lab, CH-1015 Lausanne, Switzerland
Source
2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-12 | 2006
Funding
Swiss National Science Foundation
Keywords
DOI
10.1109/IROS.2006.282283
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
This paper presents a biologically inspired approach to multimodal integration and decision making in the context of human-robot interaction. More specifically, we address the principle of ideomotor compatibility, by which observing the movements of others influences the quality of one's own performance. This fundamental human ability is likely linked to human imitation abilities, social interaction, the transfer of manual skills, and probably mind reading. We present a robotic control model capable of integrating multimodal information, making decisions, and replicating a stimulus-response compatibility task originally designed to measure the effect of ideomotor compatibility on human behavior. The model consists of a neural network based on the dynamic field approach, which is known for its natural capacity for stimulus enhancement as well as for cooperative and competitive interactions within and across sensorimotor representations. Finally, we discuss how the capacity for ideomotor facilitation can endow the robot with human-like behavior, but at the cost of several drawbacks, such as hesitation and even outright mistakes.
Pages: 5619+
Number of pages: 2
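
The control model described in the abstract builds on the dynamic field (Amari-type neural field) approach. As a rough illustration of the stimulus enhancement and the cooperative/competitive interactions such fields provide, here is a minimal one-dimensional sketch. It is not the authors' implementation: every parameter value and the two-input scenario below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a 1-D dynamic neural field (Amari-style), illustrating
# the stimulus enhancement and competitive interactions mentioned in the
# abstract. All parameter values are illustrative assumptions, not taken
# from Sauser & Billard's model.

N = 101                      # number of field sites
x = np.linspace(-10, 10, N)  # field coordinate (e.g., movement direction)
dx = x[1] - x[0]

tau = 10.0                   # time constant of the field dynamics
h = -2.0                     # resting level (field stays quiet without input)

# Interaction kernel: narrow local excitation, broader lateral inhibition.
d = x[:, None] - x[None, :]
w = 2.0 * np.exp(-d**2 / (2 * 1.0**2)) - 1.0 * np.exp(-d**2 / (2 * 4.0**2))

def f(u):
    """Sigmoidal output nonlinearity."""
    return 1.0 / (1.0 + np.exp(-u))

# Two competing localized inputs, e.g., a cued response and an observed
# (compatible or incompatible) movement of the interaction partner.
s = 3.0 * np.exp(-(x + 4)**2 / 2) + 2.5 * np.exp(-(x - 4)**2 / 2)

u = np.full(N, h)            # field activation, initialized at rest
dt = 1.0
for _ in range(500):         # Euler integration of the field dynamics
    du = -u + h + s + dx * (w @ f(u))
    u += (dt / tau) * du

# After convergence, a single self-stabilized peak marks the selected
# response; its position is the decision read out by the motor system.
print("selected response at x =", x[np.argmax(u)])
```

In this sketch, an observed movement that adds input near the cued response location deepens and speeds up the winning peak (ideomotor facilitation), while input at a competing location delays or displaces it, mirroring the hesitation and mistakes the abstract discusses.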