Deep Reinforcement Learning-Enabled Bridge Management Considering Asset and Network Risks

Cited by: 7
Authors
Yang, David Y. [1 ]
Affiliation
[1] Portland State Univ, Dept Civil & Environm Engn, 1930 SW 4th Ave, Portland, OR 97201 USA
Keywords
Deep reinforcement learning; Network flow capacity; Bridge management system; Transportation asset management;
DOI
10.1061/(ASCE)IS.1943-555X.0000704
Chinese Library Classification (CLC)
TU [Building Science]
Discipline Code
0813
Abstract
Bridges deteriorate over time due to various environmental and mechanical stressors. Deterioration poses a significant risk to bridge owners (asset risk) and the traveling public (network risk). To tackle this issue, transportation agencies carry out bridge management under limited resources to preserve bridge conditions and control the risks of bridge failure. Nonetheless, existing network-level analyses for bridge management cannot explicitly consider the effects of preservation actions on network risk, measured directly by functionality indicators such as network capacity. In this paper, a novel method based on deep reinforcement learning is proposed to devise network-level preservation policies that reflect bridge importance to network functionality. The proposed method is based on the proximal policy optimization algorithm, adapted for bridge management problems and improved through distributed computing and architectural enhancements. The method is applied to an illustrative bridge network. The results indicate that the proposed method can produce significantly better preservation policies in terms of minimizing long-term costs that include asset and network risks. The devised policies are also investigated in depth to allow for transparent interpretation and easy integration with existing bridge management systems.
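
The abstract's core technique, proximal policy optimization (PPO), centers on a clipped-surrogate policy update. The sketch below illustrates that update for a toy bridge-preservation problem in PyTorch; the state encoding, number of bridges, condition ratings, action set, reward terms, and all hyperparameters are illustrative assumptions, not the paper's calibrated model or its distributed implementation.

# Minimal sketch of a PPO clipped-surrogate update for a toy bridge-preservation
# MDP (illustrative only; not the paper's calibrated model or distributed setup).
import torch
import torch.nn as nn

N_BRIDGES = 5        # bridges in the toy network (assumed)
N_CONDITIONS = 7     # discrete condition ratings per bridge (assumed)
N_ACTIONS = 4        # e.g., do nothing, maintain, rehabilitate, replace (assumed)
CLIP_EPS = 0.2       # standard PPO clipping parameter

class PolicyValueNet(nn.Module):
    """Shared trunk with a factorized per-bridge action head and a value head."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(N_BRIDGES * N_CONDITIONS, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.pi = nn.Linear(128, N_BRIDGES * N_ACTIONS)  # one categorical per bridge
        self.v = nn.Linear(128, 1)                       # state-value estimate

    def forward(self, obs):
        h = self.trunk(obs)
        logits = self.pi(h).view(-1, N_BRIDGES, N_ACTIONS)
        return torch.distributions.Categorical(logits=logits), self.v(h).squeeze(-1)

def ppo_loss(net, obs, actions, old_logp, advantages, returns):
    """Clipped surrogate policy loss + value loss + entropy bonus (PPO core)."""
    dist, value = net(obs)
    logp = dist.log_prob(actions).sum(-1)            # joint log-prob over all bridges
    ratio = torch.exp(logp - old_logp)               # new-policy / old-policy ratio
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - CLIP_EPS, 1 + CLIP_EPS) * advantages
    policy_loss = -torch.min(unclipped, clipped).mean()
    value_loss = ((value - returns) ** 2).mean()
    entropy = dist.entropy().sum(-1).mean()
    return policy_loss + 0.5 * value_loss - 0.01 * entropy

# Dummy rollout data (shapes only). A real rollout would simulate deterioration,
# agency costs (asset risk), and network-capacity loss (network risk) as rewards.
net = PolicyValueNet()
opt = torch.optim.Adam(net.parameters(), lr=3e-4)
obs = torch.rand(32, N_BRIDGES * N_CONDITIONS)       # placeholder encoded conditions
actions = torch.randint(0, N_ACTIONS, (32, N_BRIDGES))
old_logp, advantages, returns = torch.randn(32), torch.randn(32), torch.randn(32)
loss = ppo_loss(net, obs, actions, old_logp, advantages, returns)
opt.zero_grad(); loss.backward(); opt.step()

The factorized per-bridge action head keeps the joint action space tractable as the number of bridges grows, which is one common way to frame network-level preservation decisions for a single agent.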
Pages: 14
Related Papers (50 in total)
  • [1] Deep VULMAN: A deep reinforcement learning-enabled cyber vulnerability management framework
    Hore, Soumyadeep
    Shah, Ankit
    Bastian, Nathaniel D.
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2023, 221
  • [2] Efficient Deep Reinforcement Learning-Enabled Recommendation
    Pang, Guangyao
    Wang, Xiaoming
    Wang, Liang
    Hao, Fei
    Lin, Yaguang
    Wan, Pengfei
    Min, Geyong
    [J]. IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2023, 10 (02): : 871 - 886
  • [3] Transfer Deep Reinforcement Learning-Enabled Energy Management Strategy for Hybrid Tracked Vehicle
    Guo, Xiaowei
    Liu, Teng
    Tang, Bangbei
    Tang, Xiaolin
    Zhang, Jinwei
    Tan, Wenhao
    Jin, Shufeng
    [J]. IEEE ACCESS, 2020, 8 : 165837 - 165848
  • [4] A deep reinforcement learning-enabled dynamic redeployment system for mobile ambulances
    Ji, Shenggong
    Zheng, Yu
    Wang, Zhaoyuan
    Li, Tianrui
    [J]. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2019, 3 (01):
  • [5] Deep Reinforcement Learning-Enabled Secure Visible Light Communication Against Eavesdropping
    Xiao, Liang
    Sheng, Geyi
    Liu, Sicong
    Dai, Huaiyu
    Peng, Mugen
    Song, Jian
    [J]. IEEE TRANSACTIONS ON COMMUNICATIONS, 2019, 67 (10) : 6994 - 7005
  • [6] Deep Reinforcement Learning for Resource Management in Blockchain-Enabled Federated Learning Network
    Hieu, Nguyen Quang
    Tran, The Anh
    Nguyen, Cong Luong
    Niyato, Dusit
    Kim, Dong In
    Elmroth, Erik
    [J]. IEEE Networking Letters, 2022, 4 (03): : 137 - 141
  • [7] Reinforcement Learning-Enabled Seamless Microgrids Interconnection
    Li, Yan
    Xu, Zihao
    Bowes, Kenneth B.
    Ren, Lingyu
    [J]. 2021 IEEE POWER & ENERGY SOCIETY GENERAL MEETING (PESGM), 2021,
  • [8] Application of deep reinforcement learning in asset liability management
    Wekwete, Takura Asael
    Kufakunesu, Rodwell
    van Zyl, Gusti
    [J]. Intelligent Systems with Applications, 2023, 20
  • [9] Reinforcement Learning-Enabled Electric Vehicle Load Forecasting for Grid Energy Management
    Zulfiqar, M.
    Alshammari, Nahar F.
    Rasheed, M. B.
    [J]. MATHEMATICS, 2023, 11 (07)
  • [10] Assured learning-enabled autonomy: A metacognitive reinforcement learning framework
    Mustafa, Aquib
    Mazouchi, Majid
    Nageshrao, Subramanya
    Modares, Hamidreza
    [J]. INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, 2021, 35 (12) : 2348 - 2371