Deep Reinforcement Learning for Real-Time Assembly Planning in Robot-Based Prefabricated Construction

Cited by: 11
Authors
Zhu, Aiyu [1 ]
Dai, Tianhong [2 ]
Xu, Gangyan [3 ]
Pauwels, Pieter [1 ]
de Vries, Bauke [1 ]
Fang, Meng [4 ]
Affiliations
[1] Eindhoven Univ Technol, Dept Informat Syst Built Environm, NL-5612 AZ Eindhoven, Netherlands
[2] Univ Aberdeen, Dept Comp Sci, Aberdeen AB24 3FX, Scotland
[3] Hong Kong Polytech Univ, Dept Aeronaut & Aviat Engn, Hong Kong, Peoples R China
[4] Univ Liverpool, Dept Comp Sci, Liverpool L69 3BX, England
Funding
National Natural Science Foundation of China;
Keywords
Planning; Robots; Prefabricated construction; Task analysis; Real-time systems; Safety; Decision making; assembly planning; deep reinforcement learning (DRL); automated construction; building information modelling (BIM); INFORMATION MODELING BIM; TECHNOLOGIES; FRAMEWORK; INTERNET; AUTOMATION; SIMULATION; PLATFORM; SYSTEM; SAFETY; CRANES;
DOI
10.1109/TASE.2023.3236805
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
The adoption of robotics promises to improve the efficiency, quality, and safety of prefabricated construction. Beyond technologies that improve the capability of a single robot, automated assembly planning for robots at construction sites is vital for further improving efficiency and bringing robots into practice. However, given the highly dynamic and uncertain nature of construction environments and the varied scenarios across construction sites, making appropriate and up-to-date assembly plans remains challenging. Therefore, this paper proposes a Deep Reinforcement Learning (DRL) based method for automated assembly planning in robot-based prefabricated construction. Specifically, a re-configurable simulator for assembly planning is developed based on a Building Information Model (BIM) and an open game engine, which can support the training and testing of various optimization methods. Furthermore, the assembly planning problem is modelled as a Markov Decision Process (MDP), and a set of DRL algorithms is developed and trained using the simulator. Finally, experimental case studies in four typical scenarios are conducted, and the performance of the proposed methods is verified; these results can also serve as benchmarks for future research within the automated construction community.
Note to Practitioners: This work is based on a comprehensive analysis of real-life assembly planning processes in prefabricated construction, and the proposed methods could bring many benefits to practitioners. Firstly, the proposed simulator can easily be re-configured to simulate diverse scenarios, and can be used to evaluate and verify optimization methods for operations as well as new construction technologies. Secondly, the proposed DRL-based optimization methods can be directly adopted in various robot-based construction scenarios, and can also be tailored to support assembly planning in traditional human-based or human-robot construction environments. Thirdly, the proposed DRL methods and their performance in the four typical scenarios can serve as benchmarks for proposing new advanced construction technologies and optimization methods in assembly planning.
Pages: 1515-1526
Number of pages: 12
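To illustrate the MDP formulation described in the abstract, the following is a minimal, self-contained sketch. It is not the paper's actual BIM-based simulator or DRL implementation; the environment, its precedence constraints, and the tabular Q-learning trainer are all simplified stand-ins (component counts, rewards, and hyperparameters are hypothetical), intended only to show how an assembly planning problem can be cast as states (installed components), actions (the next component to install), and rewards (valid vs. invalid placements).

```python
import random

class AssemblyPlanningEnv:
    """Toy stand-in for a BIM-based assembly planning simulator.

    State:  frozenset of already-installed prefab components.
    Action: index of the next component to install.
    Reward: +1 for a valid placement (all precedence constraints met),
            -1 for an invalid one; the episode ends when all components
            are installed.
    """

    def __init__(self, num_components=4, precedence=None):
        self.n = num_components
        # precedence[c] = set of components that must be installed before c
        # (hypothetical example: component 1 rests on 0, component 3 on 2)
        self.precedence = precedence or {1: {0}, 3: {2}}
        self.reset()

    def reset(self):
        self.installed = frozenset()
        return self.installed

    def step(self, action):
        valid = (action not in self.installed and
                 self.precedence.get(action, set()) <= self.installed)
        if valid:
            self.installed = self.installed | {action}
        reward = 1.0 if valid else -1.0
        done = len(self.installed) == self.n
        return self.installed, reward, done

def train(env, episodes=2000, alpha=0.5, gamma=0.95, eps=0.2):
    """Tabular Q-learning as a minimal stand-in for the DRL algorithms."""
    Q = {}
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            qs = Q.setdefault(s, [0.0] * env.n)
            # epsilon-greedy exploration over the action space
            if random.random() < eps:
                a = random.randrange(env.n)
            else:
                a = max(range(env.n), key=qs.__getitem__)
            s2, r, done = env.step(a)
            q2 = Q.setdefault(s2, [0.0] * env.n)
            # standard Q-learning update; future value is zero at episode end
            qs[a] += alpha * (r + gamma * max(q2) * (not done) - qs[a])
            s = s2
    return Q
```

After training, a greedy rollout over the learned Q-table yields an installation order that respects the precedence constraints. In the paper's setting, the tabular Q-table would be replaced by a neural network and the toy environment by the BIM/game-engine simulator.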