Probabilistic Multimodal Modeling for Human-Robot Interaction Tasks

Cited by: 0
Authors
Campbell, Joseph [1 ]
Stepputtis, Simon [1 ]
Amor, Heni Ben [1 ]
Affiliations
[1] Arizona State Univ, Sch Comp Informat & Decis Syst Engn, Tempe, AZ 85287 USA
Funding
U.S. National Science Foundation
Keywords
DATA ASSIMILATION; FILTERS;
DOI
Not available
Chinese Library Classification
TP24 [Robotics]
Subject Classification Codes
080202; 1405
Abstract
Human-robot interaction benefits greatly from multimodal sensor inputs, as they enable increased robustness and generalization accuracy. Despite this observation, few human-robot interaction (HRI) methods are capable of efficiently performing inference for multimodal systems. In this work, we introduce a reformulation of Interaction Primitives that allows interaction tasks to be learned from demonstration while gracefully handling the nonlinearities inherent to multimodal inference in such scenarios. We also empirically show that our method yields more accurate, more robust, and faster inference than standard Interaction Primitives and other common methods in challenging HRI scenarios.
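The abstract gives no implementation details, but the keywords (data assimilation, filters) point to nonlinear filtering over fused sensor streams. As a hedged illustration only, not the paper's actual method, the sketch below shows an ensemble Kalman filter analysis step fusing two hypothetical observation modalities (e.g., a precise and a noisy sensor observing the same latent state); all variable names and noise levels are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, H, y, R):
    """One ensemble Kalman filter analysis step (perturbed observations).

    ensemble : (n_state, n_members) forecast ensemble
    H        : (n_obs, n_state) linear observation operator
    y        : (n_obs,) stacked observations from all modalities
    R        : (n_obs, n_obs) observation noise covariance
    """
    n_state, n_members = ensemble.shape
    mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - mean                        # ensemble anomalies
    Pf = A @ A.T / (n_members - 1)             # sample forecast covariance
    S = H @ Pf @ H.T + R                       # innovation covariance
    K = Pf @ H.T @ np.linalg.inv(S)            # Kalman gain
    # Perturb observations so the analysis ensemble keeps the right spread.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_members).T
    return ensemble + K @ (Y - H @ ensemble)

# Hypothetical usage: scalar latent state, two modalities stacked into one
# observation vector; the first modality is assumed more precise.
ens = rng.normal(0.0, 1.0, size=(1, 500))      # forecast ensemble around 0
H = np.array([[1.0], [1.0]])                   # both modalities observe the state
R = np.diag([0.1, 0.5])                        # per-modality noise variances
y = np.array([2.0, 2.1])                       # the two measurements
ens_a = enkf_update(ens, H, y, R)              # analysis mean shifts toward ~2
```

Stacking modalities into one observation vector with a block-diagonal noise covariance lets a single update weight each modality by its reliability, which is one standard way filtering methods handle multimodal input.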
Pages: 9
Related Papers
50 records in total
  • [1] Schmerling, E., Leung, K., Vollprecht, W., Pavone, M. Multimodal Probabilistic Model-Based Planning for Human-Robot Interaction. 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018: 3399-3406.
  • [2] Wang, Z., Muelling, K., Deisenroth, M. P., Ben Amor, H., Vogt, D., Schoelkopf, B., Peters, J. Probabilistic movement modeling for intention inference in human-robot interaction. International Journal of Robotics Research, 2013, 32(7): 841-858.
  • [3] Althaus, P., Ishiguro, H., Kanda, T., Miyashita, T., Christensen, H. I. Navigation for human-robot interaction tasks. 2004 IEEE International Conference on Robotics and Automation, Vols 1-5, Proceedings, 2004: 1894-1900.
  • [4] Burke, D., Schurr, N., Ayers, J., Rousseau, J., Fertitta, J., Carlin, A., Dumond, D. Multimodal Interaction for Human-Robot Teams. Unmanned Systems Technology XV, 2013, 8741.
  • [5] Ben Amor, H., Neumann, G., Kamthe, S., Kroemer, O., Peters, J. Interaction Primitives for Human-Robot Cooperation Tasks. 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014: 2831-2837.
  • [6] Su, H., Qi, W., Chen, J., Yang, C., Sandoval, J., Laribi, M. A. Recent advancements in multimodal human-robot interaction. Frontiers in Neurorobotics, 2023, 17.
  • [7] Lucignano, L., Cutugno, F., Rossi, S., Finzi, A. A Dialogue System for Multimodal Human-Robot Interaction. ICMI'13: Proceedings of the 2013 ACM International Conference on Multimodal Interaction, 2013: 197-204.
  • [8] Luo, R. C., Wu, Y. C., Lin, P. H. Multimodal Information Fusion for Human-Robot Interaction. 2015 IEEE 10th Jubilee International Symposium on Applied Computational Intelligence and Informatics (SACI), 2015: 535-540.
  • [9] Zhu, H., Yu, C., Cangelosi, A. Affective Human-Robot Interaction with Multimodal Explanations. Social Robotics, ICSR 2022, Pt I, 2022, 13817: 241-252.
  • [10] Ubeda, A., Ianez, E., Azorin, J. M., Sabater, J. M., Garcia, N. M., Perez, C. Improving Human-Robot Interaction by a Multimodal Interface. IEEE International Conference on Systems, Man and Cybernetics (SMC 2010), 2010: 3580-3585.