Representation Reinforcement Learning-Based Dense Control for Point Following With State Sparse Sensing of 3-D Snake Robots

Cited by: 1
Authors
Liu, Lixing [1 ,2 ]
Liu, Jiashun [3 ]
Guo, Xian [1 ,2 ]
Huang, Wei [1 ,2 ]
Fang, Yongchun [1 ,2 ]
Hao, Jianye [3 ]
Affiliations
[1] Nankai Univ, Coll Artificial Intelligence, Inst Robot & Automat Informat Syst, Tianjin 300350, Peoples R China
[2] Nankai Univ, Tianjin Key Lab Intelligent Robot, Tianjin 300350, Peoples R China
[3] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300350, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Robot sensing systems; Snake robots; Robots; Sensors; Motion control; Training; Crawlers; Standards; Process control; Optimization; 3-D snake robots; biomimetic robots; dense motion control; representation reinforcement learning (RRL); sparse state sensing; GAIT DESIGN;
DOI
10.1109/TMECH.2024.3465018
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
During robot movement, environmental states often fail to update in real time because of interference from factors such as obstacle occlusion and communication disruption, which commonly causes interruptions or even failures in motion control. To achieve dense motion control under sparse state sensing, a key challenge is predicting multiple future actions from sparse states, which is hindered by the large and complex action search space. Limited research has been dedicated to this challenge. Therefore, this article proposes a representation reinforcement learning (RRL) based solution, called Sparse-State to Dense-Actions Latent Control, designed to realize dense motion control of 3-D snake robots under sparse environmental state sensing while guaranteeing satisfactory point-following performance. In particular, by introducing a latent representation of multiple actions, the control policy optimizes latent actions to predict dense motion gaits, which significantly improves training efficiency and motion performance. Meanwhile, to learn a compact latent variable model, three mechanisms are proposed to ensure its efficient training, semantic smoothness, and energy efficiency, facilitating exploration by the RL algorithm. To the best of our knowledge, this article provides the first solution that enables a 3-D snake robot to accomplish point-following tasks under sparse state sensing. Simulation and practical experiments confirm the effectiveness, robustness, and generalizability of the proposed algorithm, with all following errors below 0.02 m.
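To make the core idea of the abstract concrete, the sketch below illustrates mapping a single sparse state observation to a latent action that is decoded into a dense sequence of joint commands. This is a minimal conceptual sketch in PyTorch, not the authors' implementation: all module names, dimensions (STATE_DIM, LATENT_DIM, HORIZON, NUM_JOINTS), and network sizes are hypothetical assumptions, and the three latent-model training mechanisms described in the abstract are not modeled here.

```python
# Conceptual sketch only (assumed names and dimensions, not the paper's architecture).
import torch
import torch.nn as nn

STATE_DIM = 20    # hypothetical sparse-state dimension (e.g., robot pose + target point)
LATENT_DIM = 8    # hypothetical latent-action dimension optimized by the RL policy
HORIZON = 16      # number of dense actions predicted per sparse observation
NUM_JOINTS = 12   # hypothetical joint count of the 3-D snake robot


class LatentActionPolicy(nn.Module):
    """Maps one sparse state observation to a compact latent action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, LATENT_DIM), nn.Tanh(),
        )

    def forward(self, state):
        return self.net(state)


class DenseActionDecoder(nn.Module):
    """Decodes a latent action into HORIZON joint-command vectors (a dense gait segment)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, HORIZON * NUM_JOINTS), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, HORIZON, NUM_JOINTS)


if __name__ == "__main__":
    policy, decoder = LatentActionPolicy(), DenseActionDecoder()
    sparse_state = torch.randn(1, STATE_DIM)   # one delayed/sparse observation
    z = policy(sparse_state)                   # RL search happens in this small latent space
    dense_actions = decoder(z)                 # executed until the next observation arrives
    print(dense_actions.shape)                 # torch.Size([1, 16, 12])
```

The design point the sketch tries to convey is that the policy searches over an 8-dimensional latent space rather than directly over HORIZON x NUM_JOINTS raw commands, which is why the abstract reports improved training efficiency.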
Pages: 11
Related Papers
50 records in total
  • [1] Reinforcement learning-based motion control for snake robots in complex environments
    Zhang, Dong
    Ju, Renjie
    Cao, Zhengcai
    ROBOTICA, 2024, 42 (04) : 947 - 961
  • [2] A Reinforcement Learning-Based Strategy of Path Following for Snake Robots with an Onboard Camera
    Liu, Lixing
    Guo, Xian
    Fang, Yongchun
    SENSORS, 2022, 22 (24)
  • [3] Reinforcement learning-based framework for whale rendezvous via autonomous sensing robots
    Jadhav, Ninad
    Bhattacharya, Sushmita
    Vogt, Daniel
    Aluma, Yaniv
    Tonessen, Pernille
    Prabhakara, Akarsh
    Kumar, Swarun
    Gero, Shane
    Wood, Robert J.
    Gil, Stephanie
    SCIENCE ROBOTICS, 2024, 9 (95)
  • [4] Neural Network Model-Based Reinforcement Learning Control for AUV 3-D Path Following
    Ma, Dongfang
    Chen, Xi
    Ma, Weihao
    Zheng, Huarong
    Qu, Fengzhong
    IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2024, 9 (01): : 893 - 904
  • [5] Deep Reinforcement Learning-Based Control of Bicycle Robots on Rough Terrain
    Zhu, Xianjin
    Zheng, Xudong
    Deng, Yang
    Chen, Zhang
    Liang, Bin
    Liu, Yu
    2023 9TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND ROBOTICS, ICCAR, 2023, : 103 - 108
  • [6] Learning-Based Split Unfolding Framework for 3-D mmW Radar Sparse Imaging
    Wei, Shunjun
    Zhou, Zichen
    Wang, Mou
    Zhang, Hao
    Shi, Jun
    Zhang, Xiaoling
    Fan, Ling
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [7] Modeling and Control of Hybrid 3-D Gaits of Snake-Like Robots
    Cao, Zhengcai
    Zhang, Dong
    Zhou, MengChu
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (10) : 4603 - 4612
  • [8] Reinforcement Learning-Based Interference Control for Ultra-Dense Small Cells
    Zhang, Hailu
    Min, Minghui
    Xiao, Liang
    Liu, Sicong
    Cheng, Peng
    Peng, Mugen
    2018 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2018,
  • [9] Reinforcement Learning Based Multi-Layer Bayesian Control for Snake Robots in Cluttered Scenes
    Qu, Jessica Ziyu
    Qu, William Ziming
    Li, Li
    Jia, Yuanyuan
    2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, IROS, 2023, : 4702 - 4708
  • [10] A High-Performance Learning-Based Framework for Monocular 3-D Point Cloud Reconstruction
    Zamani, AmirHossein
    Ghaffari, Kamran
    Aghdam, Amir G.
    IEEE JOURNAL OF RADIO FREQUENCY IDENTIFICATION, 2024, 8 : 695 - 712