Robot System Assistant (RoSA): Towards Intuitive Multi-Modal and Multi-Device Human-Robot Interaction

Cited by: 20
Authors
Strazdas, Dominykas [1 ]
Hintz, Jan [1 ]
Khalifa, Aly [1 ]
Abdelrahman, Ahmed A. [1 ]
Hempel, Thorsten [1 ]
Al-Hamadi, Ayoub [1 ]
Affiliations
[1] Otto von Guericke Univ, Neuroinformat Technol, D-39106 Magdeburg, Germany
Keywords
augmented reality; activity recognition; cooperative systems; facial recognition; gesture recognition; human-robot interaction; interactive systems; robot control; speech recognition; ENGAGEMENT; INTERFACE; FEEDBACK;
DOI
10.3390/s22030923
CLC classification number
O65 [Analytical Chemistry];
Subject classification number
070302 ; 081704 ;
Abstract
This paper presents an implementation of RoSA, a Robot System Assistant, for safe and intuitive human-machine interaction. The interaction modalities were chosen based on a prior Wizard of Oz study, which showed a strong preference for speech and pointing gestures. Building on these findings, we design and implement a new multi-modal system for contactless human-machine interaction based on speech, facial, and gesture recognition. We evaluate the proposed system in an extensive study with multiple subjects to examine user experience and interaction efficiency. The results show that our method achieves usability scores similar to those of the fully human-operated, remote-controlled robot interaction in our Wizard of Oz study. Furthermore, our framework is implemented on the Robot Operating System (ROS), providing modularity and extensibility for our multi-device, multi-user method.
Pages: 24
Related papers (50 total)
  • [31] Multi-modal referring expressions in human-human task descriptions and their implications for human-robot interaction
    Gross, Stephanie
    Krenn, Brigitte
    Scheutz, Matthias
    [J]. INTERACTION STUDIES, 2016, 17 (02) : 180 - 210
  • [32] Multi-modal Proactive Approaching of Humans for Human-Robot Cooperative Tasks
    Naik, Lakshadeep
    Palinko, Oskar
    Bodenhagen, Leon
    Krueger, Norbert
    [J]. 2021 30TH IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2021, : 323 - 329
  • [33] Multi-modal feature fusion for better understanding of human personality traits in social human-robot interaction
    Shen, Zhihao
    Elibol, Armagan
    Chong, Nak Young
    [J]. ROBOTICS AND AUTONOMOUS SYSTEMS, 2021, 146
  • [34] Multi-modal Robot Apprenticeship: Imitation Learning using Linearly Decayed DMP plus in a Human-Robot Dialogue System
    Wu, Yan
    Wang, Ruohan
    D'Haro, Luis F.
    Banchs, Rafael E.
    Tee, Keng Peng
    [J]. 2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018, : 8582 - 8588
  • [35] Development of multi-modal interfaces in multi-device environments
    Berti, S
    Paternò, F
    [J]. HUMAN-COMPUTER INTERACTION - INTERACT 2005, PROCEEDINGS, 2005, 3585 : 1067 - 1070
  • [36] Multi-modal communication system for mobile robot
    Svec, Jan
    Neduchal, Petr
    Hruz, Marek
    [J]. IFAC PAPERSONLINE, 2022, 55 (04): : 133 - 138
  • [37] Intuitive Physical Human-Robot Interaction
    Badeau, Nicolas
    Gosselin, Clement
    Foucault, Simon
    Laliberte, Thierry
    Abdallah, Muhammad E.
    [J]. IEEE ROBOTICS & AUTOMATION MAGAZINE, 2018, 25 (02) : 28 - 38
  • [38] Navigating to Success in Multi-Modal Human-Robot Collaboration: Analysis and Corpus Release
    Lukin, Stephanie M.
    Pollard, Kimberly A.
    Bonial, Claire
    Hudson, Taylor
    Artstein, Ron
    Voss, Clare
    Traum, David
    [J]. 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, 2023, : 1859 - 1865
  • [39] Multi-Modal Humanoid Robot
    Thoshith, S.
    Mulgund, Samarth
    Sindgi, Praveen
    Yogesh, N.
    Kumaraswamy, R.
    [J]. 2018 INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS, SIGNAL PROCESSING AND NETWORKING (WISPNET), 2018
  • [40] Multi-modal robot interfaces
    [J]. Springer Tracts in Advanced Robotics, 2005, 14 : 5 - 7