Intuitive Multi-modal Human-Robot Interaction via Posture and Voice

Cited by: 0
Authors
Lai, Yuzhi [1 ]
Radke, Mario [1 ]
Nassar, Youssef [1 ]
Gopal, Atmaraaj [1 ]
Weber, Thomas [1 ]
Liu, ZhaoHua [2 ]
Zhang, Yihong [3 ]
Raetsch, Matthias [1 ]
Affiliations
[1] Reutlingen Univ, D-72762 Reutlingen, Germany
[2] Hunan Univ Sci & Technol, Xiangtan 411199, Peoples R China
[3] Donghua Univ, Shanghai 201620, Peoples R China
Keywords
Human-robot collaboration; multi-modal control; intent recognition
DOI
10.1007/978-3-031-59057-3_28
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Collaborative robots promise to greatly improve quality of life for the aging population and to ease elder care. However, existing systems often rely on hand gestures, which can be restrictive and less accessible for users with cognitive disabilities. This paper introduces a multi-modal command input that combines voice and deictic postures to create natural human-robot interaction. In addition, we couple the system with a chatbot to make the interaction responsive. The demonstrated deictic postures, the voice input, and the perceived table-top scene are processed in real time to extract the human's intention. The system is evaluated on increasingly complex tasks using a real Universal Robots UR3e 6-DoF robot arm. Preliminary results show a high task-completion success rate and a notable improvement over gesture-based systems: issuing commands through multi-modal input rather than gesture control saves up to 48.1% of the time needed to command the robot. Our system integrates the advantages of voice commands and deictic postures to facilitate intuitive human-robot interaction. Compared to conventional gesture control, our approach requires minimal training, eliminates the need to memorize complex gestures, and results in shorter interaction times.
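The abstract describes fusing a spoken command with a demonstrated deictic posture against the perceived table-top scene to extract the user's intention. The sketch below is one minimal illustration of that idea, not the paper's implementation: all names (SceneObject, resolve_deictic_target, fuse_command) and the keyword-based verb parsing are assumptions for illustration. The pointing posture is reduced to a wrist-to-fingertip ray on the table plane, and the object nearest that ray is taken as the target.

from dataclasses import dataclass
import math

# Hypothetical record for one object reported by the table-top perception module.
@dataclass
class SceneObject:
    label: str
    x: float  # table-plane coordinates, metres
    y: float

def resolve_deictic_target(objects, wrist, fingertip):
    """Interpret the pointing posture as a ray from wrist to fingertip on the
    table plane; return the object with the smallest perpendicular distance
    to that ray, ignoring objects behind the pointing hand."""
    dx, dy = fingertip[0] - wrist[0], fingertip[1] - wrist[1]
    norm = math.hypot(dx, dy)
    if norm == 0:
        return None
    dx, dy = dx / norm, dy / norm

    def ray_distance(obj):
        ox, oy = obj.x - wrist[0], obj.y - wrist[1]
        if ox * dx + oy * dy < 0:      # object lies behind the pointing hand
            return float("inf")
        return abs(ox * dy - oy * dx)  # perpendicular distance to the ray

    best = min(objects, key=ray_distance, default=None)
    if best is None or ray_distance(best) == float("inf"):
        return None
    return best

def fuse_command(utterance, objects, wrist, fingertip):
    """Fuse the two modalities: the verb comes from the voice channel, the
    target from the deictic posture resolved against the scene."""
    verb = "pick" if "pick" in utterance.lower() else "show"
    target = resolve_deictic_target(objects, wrist, fingertip)
    return {"action": verb, "target": target.label if target else None}

if __name__ == "__main__":
    scene = [SceneObject("cup", 0.40, 0.10), SceneObject("box", 0.35, -0.20)]
    # "Pick that one" while pointing roughly toward the cup.
    print(fuse_command("Pick that one", scene, wrist=(0.0, 0.0), fingertip=(0.2, 0.05)))
    # -> {'action': 'pick', 'target': 'cup'}

In a deployed system the verb would come from a speech recognizer and the chatbot, and the posture from a body-pose estimator; the point here is only the late-fusion step that joins the two channels into one robot command.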
Pages: 441-456
Number of pages: 16
Related Papers (50 records in total)
  • [1] Multi-modal anchoring for human-robot interaction
    Fritsch, J
    Kleinehagenbrock, M
    Lang, S
    Plötz, T
    Fink, GA
    Sagerer, G
    [J]. ROBOTICS AND AUTONOMOUS SYSTEMS, 2003, 43 (2-3) : 133 - 147
  • [2] Robot System Assistant (RoSA): Towards Intuitive Multi-Modal and Multi-Device Human-Robot Interaction
    Strazdas, Dominykas
    Hintz, Jan
    Khalifa, Aly
    Abdelrahman, Ahmed A.
    Hempel, Thorsten
    Al-Hamadi, Ayoub
    [J]. SENSORS, 2022, 22 (03)
  • [3] Multi-modal interfaces for natural Human-Robot Interaction
    Andronas, Dionisis
    Apostolopoulos, George
    Fourtakas, Nikos
    Makris, Sotiris
    [J]. 10TH CIRP SPONSORED CONFERENCE ON DIGITAL ENTERPRISE TECHNOLOGIES (DET 2020) - DIGITAL TECHNOLOGIES AS ENABLERS OF INDUSTRIAL COMPETITIVENESS AND SUSTAINABILITY, 2021, 54 : 197 - 202
  • [4] Multi-modal Language Models for Human-Robot Interaction
    Janssens, Ruben
    [J]. COMPANION OF THE 2024 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, HRI 2024 COMPANION, 2024, : 109 - 111
  • [5] Continuous Multi-Modal Interaction Causes Human-Robot Alignment
    Wallkötter, Sebastian
    Joannou, Michael
    Westlake, Samuel
    Belpaeme, Tony
    [J]. PROCEEDINGS OF THE 5TH INTERNATIONAL CONFERENCE ON HUMAN AGENT INTERACTION (HAI'17), 2017, : 375 - 379
  • [6] Human-Robot Interaction with Multi-Human Social Pattern Inference on a Multi-Modal Robot
    Tseng, Shih-Huan
    Wu, Tung-Yen
    Cheng, Ching-Ying
    Fu, Li-Chen
    [J]. 2014 14TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS 2014), 2014, : 819 - 824
  • [7] Bidirectional Multi-modal Signs of Checking Human-Robot Engagement and Interaction
    Maniscalco, Umberto
    Storniolo, Pietro
    Messina, Antonio
    [J]. INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2022, 14 (05) : 1295 - 1309
  • [8] A Multi-modal Gesture Recognition System in a Human-Robot Interaction Scenario
    Li, Zhi
    Jarvis, Ray
    [J]. 2009 IEEE INTERNATIONAL WORKSHOP ON ROBOTIC AND SENSORS ENVIRONMENTS (ROSE 2009), 2009, : 41 - 46
  • [9] A Multi-modal Sensor Array for Safe Human-Robot Interaction and Mapping
    Abah, Colette
    Orekhov, Andrew L.
    Johnston, Garrison L. H.
    Yin, Peng
    Choset, Howie
    Simaan, Nabil
    [J]. 2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019, : 3768 - 3774