Robot System Assistant (RoSA): concept for an intuitive multi-modal and multi-device interaction system

Cited by: 0
Authors
Strazdas, Dominykas [1 ]
Hintz, Jan [1 ]
Khalifa, Aly [1 ]
Al-Hamadi, Ayoub [1 ]
Affiliations
[1] Otto von Guericke Univ, Neuro Informat Technol, Magdeburg, Germany
Keywords
Speech recognition; Cooperative systems; Gesture recognition; Facial recognition; Human-robot interaction; Intelligent robots; Interactive systems; Robot control; Robot learning; Telerobotics;
DOI
10.1109/ICHMS53169.2021.9582663
Chinese Library Classification
TP3 (Computing technology; computer technology)
Discipline Code
0812
Abstract
This paper presents RoSA, the Robot System Assistant, a concept for intuitive human-machine interaction based on speech, facial, and gesture recognition. The interaction modalities were identified and reviewed in a preceding Wizard-of-Oz study, which showed high impact for speech and pointing gestures. The system's framework is based on the Robot Operating System (ROS), providing modularity and extensibility. This contactless concept also includes ideas for multi-device and multi-user implementation using different workstations.
Pages: 247-250
Number of pages: 4
Related Papers
50 records in total
  • [1] Robot System Assistant (RoSA): Towards Intuitive Multi-Modal and Multi-Device Human-Robot Interaction
    Strazdas, Dominykas
    Hintz, Jan
    Khalifa, Aly
    Abdelrahman, Ahmed A.
    Hempel, Thorsten
    Al-Hamadi, Ayoub
    [J]. SENSORS, 2022, 22 (03)
  • [2] Designing Multi-Modal Multi-Device Interfaces
    Berti, Silvia
    Paterno, Fabio
    [J]. ERCIM NEWS, 2005, (62): : 40 - 41
  • [3] Development of multi-modal interfaces in multi-device environments
    Berti, S
    Paternò, F
    [J]. HUMAN-COMPUTER INTERACTION - INTERACT 2005, PROCEEDINGS, 2005, 3585 : 1067 - 1070
  • [4] An input widget framework for multi-modal and multi-device environments
    Kobayashi, N
    Tokunaga, E
    Kimura, H
    Hirakawa, Y
    Ayabe, M
    Nakajima, T
    [J]. THIRD IEEE WORKSHOP ON SOFTWARE TECHNOLOGIES FOR FUTURE EMBEDDED AND UBIQUITOUS SYSTEMS, PROCEEDINGS, 2005, : 63 - 70
  • [5] Multi-Modal Multi-Sensor Interaction between Human and Heterogeneous Multi-Robot System
    Al Mahi, S. M.
    [J]. ICMI'18: PROCEEDINGS OF THE 20TH ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2018, : 524 - 528
  • [6] Multi-Modal Interaction Device
    Kim, Yul Hee
    Byeon, Sang-Kyu
    Kim, Yu-Joon
    Choi, Dong-Soo
    Kim, Sang-Youn
    [J]. INTERNATIONAL CONFERENCE ON MECHANICAL DESIGN, MANUFACTURE AND AUTOMATION ENGINEERING (MDMAE 2014), 2014, : 327 - 330
  • [7] Multi-modal communication system for mobile robot
    Svec, Jan
    Neduchal, Petr
    Hruz, Marek
    [J]. IFAC PAPERSONLINE, 2022, 55 (04): : 133 - 138
  • [8] Intuitive Multi-modal Human-Robot Interaction via Posture and Voice
    Lai, Yuzhi
    Radke, Mario
    Nassar, Youssef
    Gopal, Atmaraaj
    Weber, Thomas
    Liu, ZhaoHua
    Zhang, Yihong
    Raetsch, Matthias
    [J]. ROBOTICS, COMPUTER VISION AND INTELLIGENT SYSTEMS, ROBOVIS 2024, 2024, 2077 : 441 - 456
  • [9] A Multi-modal Gesture Recognition System in a Human-Robot Interaction Scenario
    Li, Zhi
    Jarvis, Ray
    [J]. 2009 IEEE INTERNATIONAL WORKSHOP ON ROBOTIC AND SENSORS ENVIRONMENTS (ROSE 2009), 2009, : 41 - 46
  • [10] An Introduction to the Multi-Modal Multi-Robot (MuMoMuRo) Control System
    Tse, Jason T. P.
    Chan, Stephen C. F.
    Ngai, Grace
    [J]. IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC 2010), 2010,