A Deep Reinforcement Learning based Approach for Bridge Health Maintenance

Cited by: 0
Authors:
Gadiraju, Divija Swetha [1 ]
Muthiah, Surya Rajalakshmi [1 ]
Khazanchi, Deepak [1 ]
Affiliations:
[1] US Army Corps of Engineers, Engineer Research and Development Center (ERDC), Vicksburg, MS 39180 USA
Keywords:
Applied Artificial Intelligence; Deep Reinforcement Learning; Neural Networks; Advanced Machine Learning; Structural Health Monitoring
DOI:
10.1109/CSCI62032.2023.00014
CLC number (Chinese Library Classification):
TP18 [Artificial Intelligence Theory]
Subject classification codes:
081104; 0812; 0835; 1405
Abstract:
This work proposes a deep reinforcement learning (DRL)-based solution to the bridge health monitoring problem, determining cost-effective and hazard-reducing repair strategies for deteriorated bridges. Efficient maintenance schedules are needed that make use of the available offline data. To address this, a DRL-based model is introduced to improve the maintenance of Nebraska bridges throughout the bridge life-cycle. The DRL agent uses the offline data to predict optimal maintenance activities and, subject to budget limitations, generates a maintenance plan that maximizes cost-effectiveness. The approach captures structural degradation and the impact of maintenance operations over time using probabilistic models that simulate the underlying stochastic process. We propose a reinforcement learning algorithm for bridge maintenance built on a Dueling Double Deep Q-Network (D3QN). The results show that the learned maintenance policies remain within the specified budget limits and maximize the life-cycle cost-effectiveness of maintenance operations. Furthermore, the proposed D3QN outperforms techniques such as Dueling Deep Q-Networks (DDQN) and heuristic algorithms: it converges to higher rewards faster and achieves 75% better life-cycle cost utilization than the heuristic algorithm.
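For readers unfamiliar with D3QN, the sketch below illustrates the two ingredients the abstract names: a dueling network head (separate state-value and advantage streams) and a double Q-learning target (the online network selects the next action, the target network evaluates it). This is a minimal PyTorch illustration only; the state dimension, action count, network sizes, and loss details are assumptions for demonstration, not the paper's actual configuration.

```python
# Minimal sketch of a Dueling Double Deep Q-Network (D3QN) update, in the
# spirit of a bridge-maintenance agent trained on offline transitions.
# All sizes and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        self.value = nn.Linear(128, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(128, n_actions)  # advantage stream A(s, a)

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        h = self.trunk(s)
        v, a = self.value(h), self.advantage(h)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')
        return v + a - a.mean(dim=1, keepdim=True)

def d3qn_loss(online, target, batch, gamma=0.99):
    # batch: transitions (s, a, r, s', done) sampled from the offline data;
    # `a` is a LongTensor of action indices, `done` a 0/1 float tensor.
    s, a, r, s2, done = batch
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Double Q-learning: online net selects the action, target net evaluates it.
        a2 = online(s2).argmax(dim=1, keepdim=True)
        q2 = target(s2).gather(1, a2).squeeze(1)
        y = r + gamma * (1.0 - done) * q2
    return nn.functional.smooth_l1_loss(q, y)

# Hypothetical setup: 8 condition-state features, 4 maintenance actions.
online = DuelingQNet(state_dim=8, n_actions=4)
target = DuelingQNet(state_dim=8, n_actions=4)
target.load_state_dict(online.state_dict())
```

A budget constraint could be imposed by masking actions whose cost exceeds the remaining budget before the argmax, or by adding a cost penalty to the reward; the abstract does not specify which mechanism the authors use.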
Pages: 43-48
Number of pages: 6
Related papers
(50 items in total)
  • [31] A Deep Reinforcement Learning-Based Approach for Android GUI Testing
    Gao, Yuemeng
    Tao, Chuanqi
    Guo, Hongjing
    Gao, Jerry
    WEB AND BIG DATA, PT III, APWEB-WAIM 2022, 2023, 13423 : 262 - 276
  • [32] The Bottleneck Simulator: A Model-Based Deep Reinforcement Learning Approach
    Serban, Iulian Vlad
    Sankar, Chinnadhurai
    Pieper, Michael
    Pineau, Joelle
    Bengio, Yoshua
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2020, 69 : 571 - 612
  • [34] A Deep Reinforcement Learning Based Approach for Home Energy Management System
    Li, Hepeng
    Wan, Zhiqiang
    He, Haibo
    2020 IEEE POWER & ENERGY SOCIETY INNOVATIVE SMART GRID TECHNOLOGIES CONFERENCE (ISGT), 2020,
  • [35] Deep Q-Learning Based Reinforcement Learning Approach for Network Intrusion Detection
    Alavizadeh, Hooman
    Alavizadeh, Hootan
    Jang-Jaccard, Julian
    COMPUTERS, 2022, 11 (03)
  • [36] Optimal policy for structure maintenance: A deep reinforcement learning framework
    Wei, Shiyin
    Bao, Yuequan
    Li, Hui
    STRUCTURAL SAFETY, 2020, 83 (83)
  • [37] Prescriptive Maintenance of Freight Vehicles using Deep Reinforcement Learning
    Tham, Chen-Khong
    Liu, Weihao
    Chattopadhyay, Rajarshi
    2023 IEEE 97TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2023-SPRING, 2023,
  • [38] Deep reinforcement learning for optimal planning of assembly line maintenance
    Geurtsen, M.
    Adan, I.
    Atan, Z.
    JOURNAL OF MANUFACTURING SYSTEMS, 2023, 69 : 170 - 188
  • [39] A Deep Reinforcement Learning Approach for Global Routing
    Liao, Haiguang
    Zhang, Wentai
    Dong, Xuliang
    Poczos, Barnabas
    Shimada, Kenji
    Kara, Levent Burak
    JOURNAL OF MECHANICAL DESIGN, 2020, 142 (06)
  • [40] A Deep Reinforcement Learning Approach for Shared Caching
    Trinadh, Pruthvi
    Thomas, Anoop
    2021 NATIONAL CONFERENCE ON COMMUNICATIONS (NCC), 2021, : 492 - 497