Real-Time Control Strategy of Exoskeleton Locomotion Trajectory Based on Multi-modal Fusion

Cited: 0
Authors
Tao Zhen
Lei Yan
Affiliation
[1] Beijing Forestry University,College of Engineering
Keywords
Exoskeleton; Multi-layer control strategy; Human–machine collaboration; Time-varying adaptive gait; Hybrid intelligence
DOI: not available
Abstract
The exoskeleton robot is a typical human–machine integrated system with the human in the loop. The ideal human–machine state achieves motion coordination, stable output, and strong personalization while reducing human–machine conflict during movement. To achieve this state, a Time-varying Adaptive Gait Trajectory Generator (TAGT) is designed to estimate the wearer's motion intention and generate a personalized gait trajectory. TAGT enhances hybrid intelligent decision-making under human–machine collaboration, promotes good motion coordination between the exoskeleton and the wearer, and reduces metabolic cost. An important feature of this controller is its multi-layer control strategy, which provides locomotion assistance to the wearer while allowing the user to shape the gait trajectory through human–robot interaction (HRI) force and locomotion information. In this article, a Temporal Convolutional Gait Prediction (TCGP) model is designed to learn the wearer's personalized gait trajectory, and the control performance is further improved by fusing a predefined gait trajectory method with an adaptive interaction-force control model. A human-in-the-loop control strategy uses feedback from the inertial and interaction-force signals to stabilize the output joint trajectories and update the system state in real time. In the experimental study, able-bodied subjects wore the exoskeleton for motion trajectory control to evaluate the online-adjustment performance of the proposed TAGT model. The results demonstrate that the TAGT controller achieves good motor coordination, allows subjects to adjust their motion within a certain range according to their walking habits, and guarantees the stability of the closed-loop system.
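The abstract describes two fusible components: a temporal-convolutional predictor of the wearer's gait and an adaptive blend between that prediction and a predefined reference trajectory, weighted by the measured HRI force. The paper gives no code, so the following is only a minimal illustrative sketch of that fusion idea; the function names, the causal-convolution predictor stand-in, and the force-normalization constant `f_max` are all assumptions, not the authors' implementation.

```python
import numpy as np

def causal_conv1d(x, kernel):
    """Causal 1-D convolution: the output at time t depends only on
    x[t-k+1 .. t], mimicking the causality of a temporal conv layer."""
    k = len(kernel)
    xp = np.concatenate([np.zeros(k - 1), x])  # left-pad so output stays causal
    return np.array([xp[t:t + k] @ kernel[::-1] for t in range(len(x))])

def blend_trajectory(q_pred, q_ref, f_hri, f_max=30.0):
    """Adaptive fusion of predicted and predefined joint trajectories.

    A larger interaction force shifts authority toward the wearer's
    predicted trajectory q_pred; a smaller force keeps the controller
    closer to the predefined reference q_ref. f_max (N) is an assumed
    normalization constant, not a value from the paper.
    """
    alpha = np.clip(np.abs(f_hri) / f_max, 0.0, 1.0)
    return alpha * q_pred + (1.0 - alpha) * q_ref

# Toy usage: smooth a noisy hip-angle signal causally, then blend it
# with a reference gait under a moderate interaction force.
q_measured = np.array([10.0, 12.0, 15.0, 14.0])         # deg, hypothetical
q_smooth = causal_conv1d(q_measured, np.array([0.5, 0.5]))
q_cmd = blend_trajectory(q_smooth, np.zeros(4), f_hri=np.full(4, 15.0))
```

With `f_hri = 0` the command reduces to the predefined trajectory, and at `f_hri >= f_max` it follows the prediction entirely, which captures the "wearer controls the gait within a certain range" behavior the abstract claims.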
Pages: 2670–2682 (12 pages)
Related Papers (50 total; entries [41]–[50] shown)
  • [41] HYPERSPECTRAL IMAGES AND LIDAR BASED DEM FUSION: A MULTI-MODAL LANDUSE CLASSIFICATION STRATEGY
    Demirkesen, Can
    Teke, Mustafa
    Sakarya, Ufuk
    2014 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (IGARSS), 2014,
  • [42] CIRF: Coupled Image Reconstruction and Fusion Strategy for Deep Learning Based Multi-Modal Image Fusion
    Zheng, Junze
    Xiao, Junyan
    Wang, Yaowei
    Zhang, Xuming
    SENSORS, 2024, 24 (11)
  • [43] Multi-modal deep-learning model for real-time prediction of recurrence in early-stage esophageal cancer: A multi-modal approach
    Jung, H. A.
    Lee, D.
    Park, B.
    Lee, K.
    Lee, H. Y.
    Kim, T. J.
    Jeon, Y. J.
    Lee, J.
    Cho, J. H.
    Kim, H. K.
    Choi, Y. S.
    Park, S.
    Sun, J-M.
    Lee, S-H.
    Ahn, J. S.
    Ahn, M-J.
    ANNALS OF ONCOLOGY, 2024, 35 : S883 - S883
  • [44] Monocular Real-time Hand Shape and Motion Capture using Multi-modal Data
    Zhou, Yuxiao
    Habermann, Marc
    Xu, Weipeng
    Habibie, Ikhsanul
    Theobalt, Christian
    Xu, Feng
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 5345 - 5354
  • [45] A Multi-modal System for Public Speaking Pilot Study on Evaluation of Real-Time Feedback
    Dermody, Fiona
    Sutherland, Alistair
    Farren, Margaret
    HUMAN-COMPUTER INTERACTION - INTERACT 2015, PT IV, 2015, 9299 : 499 - 501
  • [46] A Novel Multi-Modal Teleoperation of a Humanoid Assistive Robot with Real-Time Motion Mimic
    Ceron, Julio C.
    Sunny, Md Samiul Haque
    Brahmi, Brahim
    Mendez, Luis M.
    Fareh, Raouf
    Ahmed, Helal Uddin
    Rahman, Mohammad H.
    MICROMACHINES, 2023, 14 (02)
  • [47] COSM2IC: Optimizing Real-Time Multi-Modal Instruction Comprehension
    Weerakoon, Dulanga
    Subbaraju, Vigneshwaran
    Tran, Tuan
    Misra, Archan
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (04) : 10697 - 10704
  • [48] COMPUTER VISION AND COMPUTATIONAL INTELLIGENCE FOR REAL-TIME MULTI-MODAL SPACE DOMAIN AWARENESS
    Bolden, Mark
    Schumacher, Paul
    Spencer, David
    Hussein, Islam
    Wilkins, Matthew
    Roscoe, Christopher
    SPACEFLIGHT MECHANICS 2017, PTS I - IV, 2017, 160 : 2165 - 2178
  • [49] Real-Time Multi-Modal Human-Robot Collaboration Using Gestures and Speech
    Chen, Haodong
    Leu, Ming C.
    Yin, Zhaozheng
    JOURNAL OF MANUFACTURING SCIENCE AND ENGINEERING-TRANSACTIONS OF THE ASME, 2022, 144 (10):
  • [50] Real-time Assistance Control of Hip Exoskeleton Based on Motion Prediction
    Xu L.
    Yang W.
    Yang C.
    Zhang J.
    Wang T.
    Jiqiren/Robot, 2021, 43 (04): : 473 - 483