MoVEInt: Mixture of Variational Experts for Learning Human-Robot Interactions From Demonstrations

Cited by: 0
Authors
Prasad, Vignesh [1 ]
Kshirsagar, Alap [1 ]
Koert, Dorothea [1 ,2 ]
Stock-Homburg, Ruth [3 ]
Peters, Jan [1 ,2 ,4 ,5 ]
Chalvatzaki, Georgia [1 ,5 ]
Affiliations
[1] Tech Univ Darmstadt, Dept Comp Sci, D-64289 Darmstadt, Germany
[2] Tech Univ Darmstadt, Ctr Cognit Sci, D-64289 Darmstadt, Germany
[3] Tech Univ Darmstadt, Chair Mkt & HR Management, Dept Law & Econ, D-64289 Darmstadt, Germany
[4] German Res Ctr AI, D-64293 Darmstadt, Germany
[5] Hessian Ctr Artificial Intelligence, D-64293 Darmstadt, Germany
Source
IEEE Robotics and Automation Letters
Keywords
Robots; Human-robot interaction; Hidden Markov models; Task analysis; Robot motion; Neural networks; Adaptation models; Physical human-robot interaction; imitation learning; learning from demonstration;
DOI
10.1109/LRA.2024.3396074
Chinese Library Classification (CLC)
TP24 [Robotics];
Subject classification
080202; 1405;
Abstract
Shared dynamics models are important for capturing the complexity and variability inherent in Human-Robot Interaction (HRI). Learning such shared dynamics models can enhance coordination and adaptability, enabling successful reactive interactions with a human partner. In this work, we propose a novel approach for learning a shared latent space representation for HRIs from demonstrations in a Mixture of Experts fashion for reactively generating robot actions from human observations. We train a Variational Autoencoder (VAE) to learn robot motions, regularized using an informative latent space prior that captures the multimodality of the human observations via a Mixture Density Network (MDN). We show how our formulation derives from a Gaussian Mixture Regression formulation that is typically used in approaches for learning HRI from demonstrations, such as using an HMM/GMM to learn a joint distribution over the actions of the human and the robot. We further incorporate an additional regularization to prevent "mode collapse", a common phenomenon when using latent space mixture models with VAEs. We find that our approach of using an informative MDN prior from human observations for a VAE generates more accurate robot motions than previous HMM-based or recurrent approaches to learning shared latent representations, which we validate on various HRI datasets involving interactions such as handshakes, fistbumps, waving, and handovers. Further experiments in a real-world human-to-robot handover scenario show the efficacy of our approach in generating successful interactions with four different human interaction partners.
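To make the architecture the abstract describes concrete, here is a minimal sketch of a VAE over robot motions whose latent prior is a Mixture Density Network conditioned on the human observation. This is not the authors' implementation: the layer sizes, the class names (MDNPrior, RobotVAE), and the convexity upper bound used as a tractable surrogate for the KL to the mixture prior are all assumptions, and the paper's additional mode-collapse regularization is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MDNPrior(nn.Module):
    # Maps a human observation to a Gaussian mixture over the robot latent space.
    def __init__(self, obs_dim, latent_dim, n_components):
        super().__init__()
        self.k, self.d = n_components, latent_dim
        self.backbone = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, n_components * latent_dim)
        self.log_std = nn.Linear(64, n_components * latent_dim)
        self.logits = nn.Linear(64, n_components)

    def forward(self, obs):
        h = self.backbone(obs)
        mu = self.mu(h).view(-1, self.k, self.d)
        std = self.log_std(h).view(-1, self.k, self.d).exp()
        w = F.softmax(self.logits(h), dim=-1)  # mixture weights
        return mu, std, w

class RobotVAE(nn.Module):
    # VAE over robot motions; its latent prior is supplied by the MDN above.
    def __init__(self, robot_dim, latent_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(robot_dim, 64), nn.ReLU())
        self.enc_mu = nn.Linear(64, latent_dim)
        self.enc_log_std = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, robot_dim))

    def forward(self, robot):
        h = self.enc(robot)
        mu, std = self.enc_mu(h), self.enc_log_std(h).exp()
        z = mu + std * torch.randn_like(std)  # reparameterization trick
        return self.dec(z), mu, std

def diag_gauss_kl(mu_q, std_q, mu_p, std_p):
    # KL between diagonal Gaussians, summed over latent dimensions.
    return (torch.log(std_p / std_q)
            + (std_q ** 2 + (mu_q - mu_p) ** 2) / (2 * std_p ** 2)
            - 0.5).sum(-1)

def elbo_loss(vae, prior, human_obs, robot_motion):
    recon, mu_q, std_q = vae(robot_motion)
    mu_p, std_p, w = prior(human_obs)
    # KL(q || mixture) has no closed form; sum_k w_k * KL(q || p_k) is a
    # standard tractable upper bound (by convexity of KL in its second
    # argument), used here as a surrogate objective.
    kl = (w * diag_gauss_kl(mu_q.unsqueeze(1), std_q.unsqueeze(1),
                            mu_p, std_p)).sum(-1)
    recon_err = F.mse_loss(recon, robot_motion, reduction='none').sum(-1)
    return (recon_err + kl).mean()

# Toy usage: 15-D human observation, 7-D robot action, 5-D latent, 3 experts.
prior, vae = MDNPrior(15, 5, 3), RobotVAE(7, 5)
obs, robot = torch.randn(32, 15), torch.randn(32, 7)
elbo_loss(vae, prior, obs, robot).backward()

# Reactive generation at test time: decode the mean of the most likely
# prior component given only the current human observation.
with torch.no_grad():
    mu_p, _, w = prior(obs)
    action = vae.dec(mu_p[torch.arange(obs.shape[0]), w.argmax(-1)])

Decoding from the MDN prior alone is what makes generation reactive, since it matches the role the abstract assigns to the human-conditioned prior; taking the most likely component's mean, as above, is one simple choice among several (sampling or a weighted mean are alternatives).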
Pages: 6043-6050
Number of pages: 8