Multimodal fusion and human-robot interaction control of an intelligent robot

Cited by: 1
Authors
Gong, Tao [1 ]
Chen, Dan [1 ]
Wang, Guangping [2 ]
Zhang, Weicai [2 ]
Zhang, Junqi [2 ]
Ouyang, Zhongchuan [2 ]
Zhang, Fan [2 ]
Sun, Ruifeng [2 ]
Ji, Jiancheng Charles [1 ]
Chen, Wei [1 ]
Affiliations
[1] Shenzhen Polytech Univ, Inst Intelligent Mfg, Shenzhen, Peoples R China
[2] AVIC Changhe Aircraft Ind Grp Corp Ltd, Jingdezhen, Peoples R China
Keywords
kinematic modeling; robotic walker; multimodal fusion; human-robot interaction control; stroke
DOI
10.3389/fbioe.2023.1310247
CLC number
Q81 [Bioengineering (Biotechnology)]; Q93 [Microbiology]
Discipline codes
071005; 0836; 090102; 100705
Abstract
Introduction: Small-scale robotic walkers play an increasingly important role in assisting with Activities of Daily Living (ADL), given ever-growing rehabilitation demands and the drawbacks of existing equipment. This paper proposes a Rehabilitation Robotic Walker (RRW) for walking assistance and body weight support (BWS) during gait rehabilitation.
Methods: The walker provides patients with weight offloading and a guiding force that mimics a physiotherapist's (PT's) movements, creating a natural, comfortable, and safe environment. The system consists of an omnidirectional mobile platform, a BWS mechanism, and a pelvic brace that smooths the motion of the pelvis. To recognize human intention, four force sensors, two joysticks, and one depth-sensing camera monitor the human-machine interaction, and a multimodal fusion algorithm for intention recognition was proposed to improve accuracy. The system obtains the heading angle E, the pelvic pose F, and the motion vector H from the camera, the force sensors, and the joysticks, respectively; classifies intentions via feature extraction and information fusion; and finally outputs motor speed commands through the robot's kinematics.
Results: To validate the algorithm, a preliminary test with three volunteers was conducted to study motion control. The average integral square error (ISE) was 2.90 and the minimum was 1.96.
Discussion: The results demonstrate the efficiency of the proposed method and show that the system is capable of providing walking assistance.
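The abstract only sketches the pipeline (three intention channels fused into one velocity command, then mapped to wheel speeds through the platform kinematics). A minimal illustrative sketch of that flow is below; the weighted-average fusion, the mecanum-wheel geometry, and all parameter values are assumptions made for illustration, not the algorithm or parameters reported in the paper.

```python
import numpy as np

# Illustrative decision-level fusion of three intention channels:
# heading angle E (camera), pelvic pose F (force sensors), and motion
# vector H (joysticks). Each channel is expressed as a body-frame
# velocity estimate (vx, vy, wz). Weights and wheel geometry are
# hypothetical placeholders.

WHEEL_RADIUS = 0.1   # m (assumed)
HALF_LENGTH = 0.3    # m, half the wheelbase (assumed)
HALF_WIDTH = 0.25    # m, half the track width (assumed)

def fuse_intentions(e_heading, f_pose, h_joystick, weights=(0.3, 0.3, 0.4)):
    """Fuse per-sensor velocity estimates (vx, vy, wz) into one command
    by a normalized weighted average."""
    channels = np.array([e_heading, f_pose, h_joystick], dtype=float)
    w = np.asarray(weights, dtype=float)
    return w @ channels / w.sum()

def mecanum_wheel_speeds(cmd):
    """Standard mecanum inverse kinematics: body velocity (vx, vy, wz)
    -> four wheel angular speeds (rad/s)."""
    vx, vy, wz = cmd
    k = HALF_LENGTH + HALF_WIDTH
    return np.array([
        vx - vy - k * wz,   # front-left
        vx + vy + k * wz,   # front-right
        vx + vy - k * wz,   # rear-left
        vx - vy + k * wz,   # rear-right
    ]) / WHEEL_RADIUS

# Example: three slightly disagreeing intention estimates.
cmd = fuse_intentions([0.2, 0.0, 0.05], [0.25, 0.02, 0.0], [0.3, 0.0, 0.1])
speeds = mecanum_wheel_speeds(cmd)
```

The weighted average stands in for the paper's feature-extraction and information-fusion step, which the abstract does not detail; the inverse-kinematics mapping is the textbook form for a mecanum platform and may differ from the RRW's actual drive configuration.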
Pages: 10