Path-Planning Method Based on Reinforcement Learning for Cooperative Two-Crane Lift Considering Load Constraint

Cited by: 0
Authors
An, Jianqi [1 ,2 ,3 ]
Ou, Huimin [1 ,2 ,3 ]
Wu, Min [1 ,2 ,3 ]
Chen, Xin [1 ,2 ,3 ]
Affiliations
[1] China Univ Geosci, Sch Automat, Wuhan 430074, Peoples R China
[2] Hubei Key Lab Adv Control & Intelligent Automat Co, Wuhan 430074, Peoples R China
[3] Minist Educ, Engn Res Ctr Intelligent Technol Geoexplorat, Wuhan 430074, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Cranes; Load modeling; Three-dimensional displays; Heuristic algorithms; Vehicle dynamics; Payloads; Gravity; Planning; Path planning; Mathematical models; Cooperative two-crane lift; lift-path planning; load distribution; Q-learning; reinforcement learning; ENVIRONMENTS; ROBOT;
DOI
10.1109/TSMC.2025.3539318
CLC number
TP [Automation technology, computer technology];
Discipline classification code
0812;
Abstract
In a two-crane cooperative lift, an unreasonable load distribution between the two cranes may overload one of them and lead to a dangerous overturning accident. The load distribution should therefore be treated as a constraint when planning a safe lift path. Moreover, the load distribution between the two cranes varies with their changing postures, yet an explicit relationship between the load distribution and the postures has not been reported. This article therefore first establishes a model relating the postures of the two cranes to the load distribution between them. A new reinforcement-learning-based path-planning method is then presented, which incorporates the load constraint into the optimization objective for the cooperative two-crane lift. Simulation results show that the method yields a short lift path with a reasonable load distribution.
Pages: 2913-2923
Number of pages: 11
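
As a rough illustration of the kind of approach the abstract describes, the sketch below shows a tabular Q-learning planner on a discretized plane whose reward penalizes both path length and any payload position where the estimated load share exceeds a crane's rated capacity. The grid size, crane base positions, rated loads, reward weights, and the lever-rule load-split function are illustrative assumptions only; they do not reproduce the authors' posture-load model.

# Hypothetical Q-learning sketch for two-crane lift-path planning with a
# load-distribution constraint. All parameters and the simplified statics
# model below are illustrative assumptions, not the authors' implementation.
import numpy as np

GRID = 20                                        # payload moves on a GRID x GRID plane
CRANE_A, CRANE_B = (0.0, 10.0), (19.0, 10.0)     # assumed crane base positions
START, GOAL = (2, 2), (17, 17)
PAYLOAD_W = 10.0                                 # payload weight (arbitrary units)
LOAD_LIMIT = 7.0                                 # assumed per-crane rated load
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]     # 4-connected moves

def load_split(pos):
    # Lever-rule approximation: each crane carries a share inversely
    # proportional to its horizontal distance to the payload.
    da = np.hypot(pos[0] - CRANE_A[0], pos[1] - CRANE_A[1]) + 1e-6
    db = np.hypot(pos[0] - CRANE_B[0], pos[1] - CRANE_B[1]) + 1e-6
    fa = PAYLOAD_W * db / (da + db)              # load on crane A
    return fa, PAYLOAD_W - fa                    # load on crane B

def reward(pos):
    fa, fb = load_split(pos)
    r = -1.0                                     # step cost -> prefers short paths
    if fa > LOAD_LIMIT or fb > LOAD_LIMIT:
        r -= 50.0                                # heavy penalty for overloading a crane
    if pos == GOAL:
        r += 100.0
    return r

# Tabular Q-learning over (x, y, action)
Q = np.zeros((GRID, GRID, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.2

for episode in range(3000):
    s = START
    for _ in range(400):
        a = (np.random.randint(len(ACTIONS)) if np.random.rand() < eps
             else int(np.argmax(Q[s[0], s[1]])))
        nx = min(max(s[0] + ACTIONS[a][0], 0), GRID - 1)
        ny = min(max(s[1] + ACTIONS[a][1], 0), GRID - 1)
        ns = (nx, ny)
        r = reward(ns)
        Q[s[0], s[1], a] += alpha * (r + gamma * np.max(Q[nx, ny]) - Q[s[0], s[1], a])
        s = ns
        if s == GOAL:
            break

# Greedy rollout of the learned policy
path, s = [START], START
while s != GOAL and len(path) < 200:
    a = int(np.argmax(Q[s[0], s[1]]))
    s = (min(max(s[0] + ACTIONS[a][0], 0), GRID - 1),
         min(max(s[1] + ACTIONS[a][1], 0), GRID - 1))
    path.append(s)
print(path)

In the paper's setting the load shares would come from the posture-based model rather than this simplified distance-based split, but the overall structure of embedding the load constraint in the reward is the same idea.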