Approximate dynamic programming for network recovery problems with stochastic demand

Cited by: 16
Authors
Ulusan, Aybike [1]
Ergun, Ozlem [1]
Affiliations
[1] Northeastern Univ, Dept Mech & Ind Engn, Boston, MA 02115 USA
Funding
US National Science Foundation
Keywords
Network recovery problem; Stochastic networks; Demand uncertainty; Post-disaster response; Approximate dynamic programming; ROUTING PROBLEM; RESTORATION; OPTIMIZATION; RESILIENCE; MANAGEMENT; LOGISTICS; ENHANCE; SYSTEMS; DESIGN; MODEL
DOI
10.1016/j.tre.2021.102358
Chinese Library Classification number
F [Economics]
Discipline classification code
02
Abstract
Immediately after a disruption, it is imperative to re-establish the interdicted critical services enabled by infrastructure networks in order to minimize the negative impact on society. In this paper, we study the stochastic network recovery problem, which tackles the planning of restoration activities (under limited resources) on interdicted infrastructure network links so that pre-disruption critical service flows can be re-established as quickly as possible. As an illustrative case study, we consider a disaster scenario on a road infrastructure network that obstructs the flow of relief-aid commodities and search-and-rescue teams between critical-service-providing facilities and locations in need of those services. As in many realistic applications, we treat the amount of demand for critical services as stochastic. First, we present a Markov decision process (MDP) formulation for the stochastic road network recovery problem (SRNRP); then we propose an approximate dynamic programming (ADP) approach to heuristically solve SRNRP. We develop basis functions that capture important complex network interactions and can be used to approximate cost-to-go values for the MDP states. We conduct computational experiments on a set of small-scale randomly generated instances and demonstrate that the ADP approach provides near-optimal results regardless of the demand distribution and network topology. To develop a practical approach suitable for solving real-world-sized instances, we propose a framework in which we first build an ADP model and derive a policy on a spatially aggregated network of a large-scale instance. Next, we show the performance of this policy through computational testing on the large-scale disaggregated network. Moreover, we provide managerial insights by assessing how much each basis function in the ADP model contributes to the recovery policies.
We test this approach on a case study based on the Boston road infrastructure network. We observe that, as the urgency of re-establishing services increases or resources become scarcer, the information gained from network characteristics and short-term decisions should be the main driving factor in deriving recovery policies. The results of all experiments provide strong evidence of the value of exploiting inherent network interactions and attributes to generate basis function sets for ADP models that yield high-quality recovery policies.
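To make the ADP idea in the abstract concrete, the sketch below shows a linear cost-to-go approximation built from basis functions, together with a greedy one-step-lookahead recovery policy. This is a minimal illustrative toy, not the authors' formulation: the state representation (a set of still-damaged links), the two basis functions, the weights, and the unit stage cost are all invented placeholders.

```python
# Sketch of ADP with a linear value-function approximation:
# the cost-to-go of a state s is approximated as V(s) ~ sum_i w_i * phi_i(s),
# and the policy repairs the link minimizing immediate cost + approximate cost-to-go.
# All names and values below are illustrative assumptions, not the paper's model.

def approx_cost_to_go(state, weights, basis_funcs):
    """Linear approximation of the MDP cost-to-go via weighted basis functions."""
    return sum(w * phi(state) for w, phi in zip(weights, basis_funcs))

def greedy_recovery_policy(state, actions, transition, stage_cost,
                           weights, basis_funcs):
    """One-step lookahead: choose the repair action with lowest total cost."""
    def lookahead(a):
        next_state = transition(state, a)
        return stage_cost(state, a) + approx_cost_to_go(next_state,
                                                        weights, basis_funcs)
    return min(actions, key=lookahead)

if __name__ == "__main__":
    # Toy instance: the state is the frozenset of still-damaged links, and
    # repairing a link removes it from the state.
    critical = {"a", "c"}                      # hypothetical critical links
    basis = [lambda s: len(s),                 # phi_1: number of damaged links
             lambda s: len(s & critical)]      # phi_2: damaged critical links
    weights = [1.0, 5.0]                       # critical damage weighted heavier
    state = frozenset({"a", "b", "c"})
    action = greedy_recovery_policy(
        state,
        actions=sorted(state),
        transition=lambda s, a: s - {a},       # repairing a removes link a
        stage_cost=lambda s, a: 1.0,           # one period per repair
        weights=weights,
        basis_funcs=basis,
    )
    print(action)
```

In the paper the basis functions are designed to capture complex network interactions (e.g., connectivity between service facilities and demand locations), and the weights would be fitted during ADP training rather than set by hand as above.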
Pages: 26
Related papers
50 in total
  • [31] An approximate method for solving stochastic two-stage programming problems
    Wang, ML
    Lansey, K
    Yakowitz, D
    [J]. ENGINEERING OPTIMIZATION, 2001, 33 (03) : 279 - 302
  • [32] Solving stochastic resource-constrained project scheduling problems by closed-loop approximate dynamic programming
    Li, Haitao
    Womer, Norman K.
    [J]. EUROPEAN JOURNAL OF OPERATIONAL RESEARCH, 2015, 246 (01) : 20 - 33
  • [33] Approximate Dynamic Programming for Building Control Problems with Occupant Interactions
    Lee, Donghwan
    Lee, Seungjae
    Karava, Panagiota
    Hu, Jianghai
    [J]. 2018 ANNUAL AMERICAN CONTROL CONFERENCE (ACC), 2018, : 3945 - 3950
  • [34] An approximate dynamic programming approach to convex quadratic knapsack problems
    Hua, ZS
    Zhang, B
    Liang, L
    [J]. COMPUTERS & OPERATIONS RESEARCH, 2006, 33 (03) : 660 - 673
  • [35] Approximate dynamic programming for high dimensional resource allocation problems
    Powell, WB
    George, A
    Bouzaiene-Ayari, B
    Simao, HP
    [J]. PROCEEDINGS OF THE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), VOLS 1-5, 2005, : 2989 - 2994
  • [36] Adaptive network traffic control with approximate dynamic programming based on a non-homogeneous Poisson demand model
    Chen, Siqi
    Lu, Xing
    [J]. TRANSPORTMETRICA B-TRANSPORT DYNAMICS, 2024, 12 (01)
  • [37] Approximate stochastic dynamic programming for sensor scheduling to track multiple targets
    Li, Y.
    Krakow, L. W.
    Chong, E. K. P.
    Groom, K. N.
    [J]. DIGITAL SIGNAL PROCESSING, 2009, 19 (06) : 978 - 989
  • [38] NEW ALGORITHM OF DYNAMIC PROGRAMMING FOR STOCHASTIC PROBLEMS SOLUTION
    Dokuchaev, A. V.
    Kotenko, A. P.
    [J]. VESTNIK SAMARSKOGO GOSUDARSTVENNOGO TEKHNICHESKOGO UNIVERSITETA-SERIYA-FIZIKO-MATEMATICHESKIYE NAUKI, 2008, (02): : 203 - 209
  • [39] Dynamic programming for stochastic target problems and geometric flows
    Soner, HM
    Touzi, N
    [J]. JOURNAL OF THE EUROPEAN MATHEMATICAL SOCIETY, 2002, 4 (03) : 201 - 236
  • [40] Solving dynamic portfolio problems using stochastic programming
    Consigli, G
    Dempster, MAH
    [J]. ZEITSCHRIFT FUR ANGEWANDTE MATHEMATIK UND MECHANIK, 1997, 77 : S535 - S536