A probabilistic deep reinforcement learning approach for optimal monitoring of a building adjacent to deep excavation

Cited by: 17
Authors
Pan, Yue [1 ]
Qin, Jianjun [1 ,4 ]
Zhang, Limao [2 ]
Pan, Weiqiang [3 ]
Chen, Jin-Jian [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Naval Architecture Ocean & Civil Engn, State Key Lab Ocean Engn, Shanghai Key Lab Digital Maintenance Bldg & Infras, Shanghai, Peoples R China
[2] Huazhong Univ Sci & Technol, Sch Civil & Hydraul Engn, Wuhan, Hubei, Peoples R China
[3] Shanghai Tunnel Engn Co Ltd, Shanghai, Peoples R China
[4] Shanghai Jiao Tong Univ, Dept Civil Engn, 800 Dongchuan Rd, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
RELIABILITY; INFORMATION;
DOI
10.1111/mice.13021
Chinese Library Classification
TP39 [Computer Applications]
Subject Classification Codes
081203; 0835
Abstract
During a deep excavation project, monitoring the structural health of adjacent buildings is crucial to ensuring safety. This study therefore proposes a novel probabilistic deep reinforcement learning (PDRL) framework that optimizes the monitoring plan to minimize both cost and excavation-induced risk. First, a Bayesian bi-directional general regression neural network is built as a probabilistic model that dynamically describes the relationship between the ground settlement of the foundation pit, the safety state of the adjacent building, and the maintenance actions. Subsequently, a double deep Q-network, which can capture the realistic features of the excavation management problem, is trained to form a closed decision loop for continuous learning of monitoring strategies. Finally, the proposed PDRL approach is applied to a real-world deep excavation case on Shanghai Metro Line No. 14. The approach estimates the time-variant probability of damage occurrence and of maintenance actions, and updates the state of the adjacent building. According to the strategy derived via PDRL, when there is full confidence in the quality of the monitoring data, monitoring of the adjacent buildings begins in the middle stage of the excavation project rather than on its first day; as the uncertainty level of the data rises, the starting day shifts earlier. Notably, the proposed PDRL method is sufficiently robust to the uncertainties embedded in the environment and the model, thus helping to optimize the monitoring plan for cost-effectiveness and risk mitigation.
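The double deep Q-network mentioned in the abstract decouples action selection from action evaluation to reduce the overestimation bias of vanilla DQN. The following is a minimal illustrative sketch of that target computation only, on toy Q-tables with hypothetical states, actions, and reward values; it is not the paper's implementation, which couples the learner with the probabilistic settlement model.

```python
import numpy as np

# Illustrative sketch: double-DQN target computation on toy Q-tables.
# States/actions/rewards below are hypothetical, not the paper's.
rng = np.random.default_rng(0)

n_states, n_actions = 5, 3  # e.g. building-safety states x {do nothing, monitor, repair}
gamma = 0.95                # discount factor (assumed value)
q_online = rng.normal(size=(n_states, n_actions))  # "online network" (here: a table)
q_target = q_online.copy()                         # periodically synced "target network"

def double_dqn_target(reward, next_state, done):
    """Double DQN: the online net selects the greedy action,
    the target net evaluates it. The decoupling reduces the
    overestimation bias of the plain max-Q bootstrap."""
    if done:
        return reward
    a_star = int(np.argmax(q_online[next_state]))          # selection: online net
    return reward + gamma * q_target[next_state, a_star]   # evaluation: target net

# One TD update for a sampled transition (s, a, r, s'):
s, a, r, s_next = 0, 1, -1.0, 2   # hypothetical transition; -1.0 = monitoring cost
alpha = 0.1                       # learning rate (assumed value)
td_target = double_dqn_target(r, s_next, done=False)
q_online[s, a] += alpha * (td_target - q_online[s, a])
```

In the paper's setting, the state would additionally carry the probabilistic model's belief about the building's safety condition, and the reward would trade off monitoring cost against excavation-induced risk.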
Pages: 656-678 (23 pages)
    [J]. 2021 NATIONAL CONFERENCE ON COMMUNICATIONS (NCC), 2021, : 492 - 497