A multimodal teleoperation interface for human-robot collaboration

Cited by: 2
Authors
Si, Weiyong [1 ,2 ]
Zhong, Tianjian [3 ]
Wang, Ning [1 ,2 ]
Yang, Chenguang [1 ,2 ]
Affiliations
[1] Univ West England, Fac Environm & Technol, Bristol BS16 1QY, Avon, England
[2] Univ West England, Bristol Robot Lab, Bristol BS16 1QY, Avon, England
[3] Bristol Robot Lab, Bristol BS8 1TH, Avon, England
Keywords
Immersive teleoperation; Human-in-the-loop; Human-robot interface
DOI
10.1109/ICM54990.2023.10102060
CLC classification
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
Human-robot collaboration provides an effective way to combine human intelligence with robot autonomy, improving the safety and efficiency of robotic systems. However, developing an intuitive and immersive human-robot interface with multimodal feedback for human-robot interaction and collaboration remains challenging. In this paper, we develop a multimodal human-robot interface that keeps the human operator in the loop. A Unity-based virtual reality (VR) environment, including a virtual robot manipulator and its workspace, was built to simulate the robot's real working environment. We integrated a digital twin mechanism into the VR environment so that the virtual scene maintains a model corresponding to the physical task. The virtual environment visualizes visual and haptic feedback from the robot's multimodal sensors, providing an immersive and user-friendly teleoperation environment for human operators. We conducted a user study based on the NASA Task Load Index (NASA-TLX) using a physical contact scanning task. The results show that the proposed multimodal interface reduces cognitive and physical workload by 31.8% compared with the commercial teleoperation device Touch X.
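As an illustration of how such a NASA-TLX workload comparison can be computed, the following minimal Python sketch averages the six subscale ratings into a raw TLX score per interface and then takes the relative reduction. The subscale values, function names, and per-interface scores are hypothetical placeholders, not data from the paper.

```python
# Minimal sketch of a raw NASA-TLX comparison (hypothetical data, not from the paper).
# Each participant rates six subscales (0-100): mental, physical, and temporal demand,
# performance, effort, and frustration. The raw TLX score is the mean of the six ratings.

from statistics import mean

SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings: dict[str, float]) -> float:
    """Average the six subscale ratings into a single workload score."""
    return mean(ratings[s] for s in SUBSCALES)

def workload_reduction(baseline: float, proposed: float) -> float:
    """Relative workload reduction of the proposed interface vs. the baseline, in percent."""
    return 100.0 * (baseline - proposed) / baseline

# Hypothetical scores averaged over participants for each interface.
touch_x = raw_tlx({"mental": 62, "physical": 55, "temporal": 50,
                   "performance": 45, "effort": 60, "frustration": 48})
multimodal = raw_tlx({"mental": 42, "physical": 38, "temporal": 35,
                      "performance": 30, "effort": 40, "frustration": 33})

print(f"Workload reduction: {workload_reduction(touch_x, multimodal):.1f}%")
```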
Pages: 6
Related Papers
50 records in total
  • [21] A Gesture-based Multimodal Interface for Human-Robot Interaction
    Uimonen, Mikael
    Kemppi, Paul
    Hakanen, Taru
    [J]. 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, 2023, : 165 - 170
  • [22] A Multimodal Human-Robot Interface to Drive a Neuroprosthesis for Tremor Management
    Alvaro Gallego, Juan
    Ibanez, Jaime
    Dideriksen, Jakob Lund
    Ignacio Serrano, Jose
    Dolores del Castillo, Maria
    Farina, Dario
    Rocon, Eduardo
    [J]. IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART C-APPLICATIONS AND REVIEWS, 2012, 42 (06): 1159 - 1168
  • [23] Interface architecture design for minimum programming in human-robot collaboration
    Ji, Wei
    Wang, Yuquan
    Liu, Hongyi
    Wang, Lihui
    [J]. 51ST CIRP CONFERENCE ON MANUFACTURING SYSTEMS, 2018, 72 : 129 - 134
  • [24] Editorial: Neural Interface for Cognitive Human-Robot Interaction and Collaboration
    Huang, Rui
    Zeng, Lian
    Cheng, Hong
    Guo, Xiaodong
    [J]. FRONTIERS IN NEUROSCIENCE, 2022, 16
  • [25] An Electrical Impedance Tomography Based Interface for Human-Robot Collaboration
    Zheng, Enhao
    Li, Yuhua
    Zhao, Zhiyu
    Wang, Qining
    Qiao, Hong
    [J]. IEEE-ASME TRANSACTIONS ON MECHATRONICS, 2021, 26 (05) : 2373 - 2384
  • [26] Enhancing Safe Human-Robot Collaboration through Natural Multimodal Communication
    Maurtua, Inaki
    Fernandez, Izaskun
    Kildal, Johan
    Susperregi, Loreto
    Tellaeche, Alberto
    Ibarguren, Aitor
    [J]. 2016 IEEE 21ST INTERNATIONAL CONFERENCE ON EMERGING TECHNOLOGIES AND FACTORY AUTOMATION (ETFA), 2016,
  • [27] Role Switching in Task-Oriented Multimodal Human-Robot Collaboration
    Monaikul, Natawut
    Abbasi, Bahareh
    Rysbek, Zhanibek
    Di Eugenio, Barbara
    Zefran, Milos
    [J]. 2020 29TH IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2020, : 1150 - 1156
  • [28] Interactive Force Control Based on Multimodal Robot Skin for Physical Human-Robot Collaboration
    Armleder, Simon
    Dean-Leon, Emmanuel
    Bergner, Florian
    Cheng, Gordon
    [J]. ADVANCED INTELLIGENT SYSTEMS, 2022, 4 (02)
  • [29] A Framework and Algorithm for Human-Robot Collaboration Based on Multimodal Reinforcement Learning
    Cai, Zeyuan
    Feng, Zhiquan
    Zhou, Liran
    Ai, Changsheng
    Shao, Haiyan
    Yang, Xiaohui
    [J]. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2022, 2022
  • [30] A dual mode human-robot teleoperation interface based on airflow in the aural cavity
    Vaidyanathan, Ravi
    Fargues, Monique P.
    Kurcan, R. Serdar
    Gupta, Lalit
    Kota, Srinivas
    Quinn, Roger D.
    Lin, Dong
    [J]. INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2007, 26 (11-12): 1205 - 1223