Approximate dynamic programming for the military aeromedical evacuation dispatching, preemption-rerouting, and redeployment problem

Cited by: 13
Authors
Jenkins, Phillip R. [1 ]
Robbins, Matthew J. [1 ]
Lunday, Brian J. [1 ]
Affiliations
[1] Air Force Inst Technol, Dept Operat Sci, 2950 Hobson Way, Wright Patterson AFB, OH 45433 USA
Keywords
OR in defense; Approximate dynamic programming; Markov decision process; Support vector regression; Military MEDEVAC; DECISION-PROCESS MODEL; RELOCATION;
DOI
10.1016/j.ejor.2020.08.004
Chinese Library Classification (CLC): C93 [Management Science]
Discipline classification codes: 12; 1201; 1202; 120202
Abstract
Military medical planners must consider how aeromedical evacuation (MEDEVAC) assets will be utilized when preparing for and supporting combat operations. This research examines the MEDEVAC dispatching, preemption-rerouting, and redeployment (DPR) problem. The intent of this research is to determine high-quality DPR policies that improve the performance of United States Army MEDEVAC systems and ultimately increase the combat casualty survivability rate. A discounted, infinite-horizon Markov decision process (MDP) model of the MEDEVAC DPR problem is formulated and solved via an approximate dynamic programming (ADP) strategy that utilizes a support vector regression value function approximation scheme within an approximate policy iteration algorithmic framework. The objective is to maximize the expected total discounted reward attained by the system. The applicability of the MDP model is examined via a notional, representative planning scenario based on high-intensity combat operations to defend Azerbaijan against a notional aggressor. Computational experimentation is performed to determine how selected problem features and algorithmic features affect the quality of solutions attained by the ADP-generated DPR policies and to assess the efficacy of the proposed solution methodology. The results from the computational experiments indicate the ADP-generated policies significantly outperform the two benchmark policies considered. Moreover, the results reveal that the average service time of high-precedence, time-sensitive requests decreases when an ADP policy is adopted during high-intensity conflicts. As the rate at which requests enter the MEDEVAC system increases, the performance gap between the ADP policy and the first benchmark policy (i.e., the currently practiced, closest-available dispatching policy) increases substantially. 
Conversely, as the rate at which requests enter the system decreases, the ADP performance improvement over both benchmark policies decreases, indicating the ADP policy provides little-to-no benefit over a myopic approach (e.g., as utilized in the benchmark policies) when the intensity of a conflict is low. Ultimately, this research informs the development and implementation of future tactics, techniques, and procedures for military MEDEVAC operations. Published by Elsevier B.V.
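The abstract describes the solution approach at a high level: an infinite-horizon discounted MDP solved via approximate policy iteration, where a regression model approximates the value function between policy-improvement steps. The sketch below illustrates that loop on a hypothetical five-state dispatch queue. Everything here is an illustrative assumption, not the authors' model: the toy state space, rewards, and arrival process are invented, and an ordinary least-squares fit stands in for the paper's support vector regression so the sketch stays dependency-free.

```python
import random

GAMMA = 0.95                      # discount factor
STATES = tuple(range(5))          # pending requests in the toy system (hypothetical)
ACTIONS = ("dispatch", "hold")
ARRIVAL_P = 0.5                   # probability a new request arrives each step

def outcomes(state, action):
    """Enumerate (prob, reward, next_state) for the toy transition model."""
    triples = []
    for arrival, p in ((1, ARRIVAL_P), (0, 1.0 - ARRIVAL_P)):
        if action == "dispatch" and state > 0:
            reward, nxt = 1.0, min(state - 1 + arrival, STATES[-1])
        else:
            reward, nxt = -0.2 * state, min(state + arrival, STATES[-1])
        triples.append((p, reward, nxt))
    return triples

def q_value(state, action, value):
    """One-step lookahead against the current value function approximation."""
    return sum(p * (r + GAMMA * value(nxt)) for p, r, nxt in outcomes(state, action))

def fit_linear(xs, ys):
    """Least-squares fit y ~ a + b*x (stand-in for the paper's SVR regressor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    denom = sum((x - mx) ** 2 for x in xs) or 1.0
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom
    a = my - b * mx
    return lambda s: a + b * s

def approximate_policy_iteration(iters=15, rollouts_per_state=40, horizon=40, seed=7):
    rng = random.Random(seed)
    value = lambda s: 0.0         # initial value function approximation
    for _ in range(iters):
        # Policy implied by the current approximation (greedy lookahead).
        greedy = lambda s: max(ACTIONS, key=lambda a: q_value(s, a, value))
        # Approximate policy evaluation: Monte Carlo discounted returns.
        xs, ys = [], []
        for s0 in STATES:
            for _ in range(rollouts_per_state):
                s, g, disc = s0, 0.0, 1.0
                for _ in range(horizon):
                    triples = outcomes(s, greedy(s))
                    _, r, s = triples[0] if rng.random() < triples[0][0] else triples[1]
                    g += disc * r
                    disc *= GAMMA
                xs.append(s0)
                ys.append(g)
        value = fit_linear(xs, ys)  # regression step replaces exact evaluation
    # Policy improvement: act greedily against the final approximation.
    return {s: max(ACTIONS, key=lambda a: q_value(s, a, value)) for s in STATES}

policy = approximate_policy_iteration()
```

In this toy instance the fitted policy dispatches whenever a request is pending, which is the intuitively optimal behavior; the paper's contribution lies in making the same evaluate-fit-improve loop tractable on a vastly larger MEDEVAC state space, where the regression generalizes across states no rollout ever visits.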
Pages: 132–143 (12 pages)