Path-Planning Method Based on Reinforcement Learning for Cooperative Two-Crane Lift Considering Load Constraint

Citations: 0
Authors
An, Jianqi [1 ,2 ,3 ]
Ou, Huimin [1 ,2 ,3 ]
Wu, Min [1 ,2 ,3 ]
Chen, Xin [1 ,2 ,3 ]
Affiliations
[1] China Univ Geosci, Sch Automat, Wuhan 430074, Peoples R China
[2] Hubei Key Lab Adv Control & Intelligent Automat Co, Wuhan 430074, Peoples R China
[3] Minist Educ, Engn Res Ctr Intelligent Technol Geoexplorat, Wuhan 430074, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Cranes; Load modeling; Three-dimensional displays; Heuristic algorithms; Vehicle dynamics; Payloads; Gravity; Planning; Path planning; Mathematical models; Cooperative two-crane lift; lift-path planning; load distribution; Q-learning; reinforcement learning; ENVIRONMENTS; ROBOT;
DOI
10.1109/TSMC.2025.3539318
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In a two-crane cooperative lift, an unreasonable load distribution between the two cranes may overload one of them, which can cause a dangerous overturning accident. The load distribution should therefore be treated as a constraint when planning a safe cooperative lift path. Moreover, the load distribution varies with the changing postures of the cranes, yet an explicit relationship between the load distribution and the postures has not been reported. This article therefore first presents a model relating the postures of the two cranes to the load distribution between them. A new reinforcement-learning-based path-planning method is then presented, which incorporates the load constraint into the optimization objective for the cooperative two-crane lift. Simulation results show that the new method yields a short lift path with a reasonable load distribution.
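The Q-learning-with-load-constraint idea from the abstract can be sketched as a toy tabular planner on a small grid. Everything here is illustrative, not the authors' formulation: the 5x5 grid, the reward values, the 60% per-crane load limit, and the `load_ratio` function (a hypothetical stand-in for the paper's posture-to-load-distribution model) are all assumptions made for the sketch.

```python
import random

random.seed(0)  # deterministic run for this sketch

# Toy 5x5 grid: the shared payload moves from START to GOAL.
SIZE = 5
START, GOAL = (0, 0), (4, 4)
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
MAX_RATIO = 0.6  # assumed per-crane load limit (60% of total load)

def load_ratio(state):
    """Hypothetical stand-in for the paper's posture-to-load model:
    the farther the payload strays from the grid diagonal, the more
    unevenly the two cranes share the load (purely illustrative)."""
    x, y = state
    return 0.5 + 0.05 * abs(x - y)

def step(state, action):
    """Move on the grid; reward short paths, penalize load violations."""
    x = min(max(state[0] + action[0], 0), SIZE - 1)
    y = min(max(state[1] + action[1], 0), SIZE - 1)
    nxt = (x, y)
    if nxt == GOAL:
        return nxt, 10.0, True
    reward = -1.0                      # step cost favors short paths
    if load_ratio(nxt) > MAX_RATIO:
        reward -= 20.0                 # load-constraint penalty
    return nxt, reward, False

Q = {}
def q(s, a):
    return Q.get((s, a), 0.0)

for _ in range(2000):                  # tabular Q-learning episodes
    s = START
    for _ in range(50):
        if random.random() < 0.2:      # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: q(s, a))
        nxt, r, done = step(s, a)
        target = r + 0.9 * max(q(nxt, b) for b in ACTIONS)
        Q[(s, a)] = q(s, a) + 0.1 * (target - q(s, a))
        s = nxt
        if done:
            break

# Greedy rollout of the learned policy
path, s = [START], START
while s != GOAL and len(path) < 30:
    a = max(ACTIONS, key=lambda a: q(s, a))
    s, _, _ = step(s, a)
    path.append(s)
```

Folding the constraint into the reward as a penalty is only one way to handle it; the point of the sketch is that the learned path both reaches the goal and stays in the region where neither crane exceeds the assumed load share.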
Pages: 2913-2923
Page Count: 11