Robot System Assistant (RoSA): Towards Intuitive Multi-Modal and Multi-Device Human-Robot Interaction

Cited by: 20
Authors:
Strazdas, Dominykas [1 ]
Hintz, Jan [1 ]
Khalifa, Aly [1 ]
Abdelrahman, Ahmed A. [1 ]
Hempel, Thorsten [1 ]
Al-Hamadi, Ayoub [1 ]
Affiliation:
[1] Otto von Guericke University, Neuro-Information Technology, D-39106 Magdeburg, Germany
Keywords:
augmented reality; activity recognition; cooperative systems; facial recognition; gesture recognition; human-robot interaction; interactive systems; robot control; speech recognition; engagement; interface; feedback
DOI: 10.3390/s22030923
Chinese Library Classification: O65 [Analytical Chemistry]
Subject classification codes: 070302; 081704
Abstract:
This paper presents an implementation of RoSA, a Robot System Assistant, for safe and intuitive human-machine interaction. The interaction modalities were selected based on a prior Wizard-of-Oz study, which showed a strong preference for speech and pointing gestures. Building on these findings, we design and implement a new multi-modal system for contactless human-machine interaction based on speech, facial, and gesture recognition. We evaluate the proposed system in an extensive study with multiple subjects to examine user experience and interaction efficiency. The evaluation shows that our method achieves usability scores similar to those of the fully human-operated, remote-controlled robot interaction in our Wizard-of-Oz study. Furthermore, our framework is implemented on the Robot Operating System (ROS), providing the modularity and extensibility required for our multi-device, multi-user approach.
Pages: 24
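
The abstract describes a modular, ROS-based framework that combines speech, facial, and gesture recognition across multiple devices and users. As an illustration only, and not taken from the paper, the following minimal Python/rospy sketch shows how topic-based fusion of a speech channel and a pointing-gesture channel could be structured in ROS 1; the topic names (/rosa/speech, /rosa/gesture, /rosa/command) and message contents are assumptions.

```python
#!/usr/bin/env python3
# Minimal sketch (not from the paper): a ROS 1 node that merges hypothetical
# speech and pointing-gesture topics into a single robot command, illustrating
# the kind of modular, topic-based fusion a ROS framework such as RoSA enables.
# Topic names /rosa/speech, /rosa/gesture, /rosa/command are assumptions.
import rospy
from std_msgs.msg import String


class MultiModalFusion:
    def __init__(self):
        self.last_gesture_target = None  # e.g. "workpiece_3" from pointing
        self.pub = rospy.Publisher("/rosa/command", String, queue_size=10)
        rospy.Subscriber("/rosa/speech", String, self.on_speech)
        rospy.Subscriber("/rosa/gesture", String, self.on_gesture)

    def on_gesture(self, msg):
        # Remember the most recent pointing target so speech can refer to it.
        self.last_gesture_target = msg.data

    def on_speech(self, msg):
        # Resolve deictic speech ("pick that up") against the pointed target.
        text = msg.data.lower()
        if "that" in text and self.last_gesture_target:
            self.pub.publish(String(data=f"pick {self.last_gesture_target}"))
        else:
            self.pub.publish(String(data=text))


if __name__ == "__main__":
    rospy.init_node("rosa_fusion_sketch")
    MultiModalFusion()
    rospy.spin()
```

Because each modality arrives on its own topic, recognition modules can be added, swapped, or distributed across devices without changing the fusion node, which mirrors the modularity and extensibility the abstract attributes to the ROS-based design.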