Improving defensive air battle management by solving a stochastic dynamic assignment problem via approximate dynamic programming

Cited by: 4
Authors
Liles, Joseph M. [1 ]
Robbins, Matthew J. [1 ]
Lunday, Brian J. [1 ]
Affiliation
[1] US Air Force Inst Technol, Dept Operat Sci, 2950 Hobson Way, Wright Patterson AFB, OH 45433 USA
Keywords
OR in defense; Air battle management; Dynamic assignment problem; Markov decision process; Approximate dynamic programming
DOI
10.1016/j.ejor.2022.06.031
CLC classification
C93 [Management Science]
Discipline codes
12; 1201; 1202; 120202
Abstract
Military air battle managers face several challenges when directing operations during quickly evolving combat scenarios. These scenarios require rapid assignment decisions to engage moving targets having dynamic flight paths. In defensive operations, the success of a sequence of air battle management decisions is reflected by the friendly force's ability to maintain air superiority and defend friendly assets. We develop a Markov decision process (MDP) model of a stochastic dynamic assignment problem, named the Air Battle Management Problem (ABMP), wherein a set of unmanned combat aerial vehicles (UCAVs) must defend an asset from cruise missiles arriving stochastically over time. Attaining an exact solution using traditional dynamic programming techniques is computationally intractable. Hence, we utilize an approximate dynamic programming (ADP) technique known as approximate policy iteration with least squares temporal differences (API-LSTD) learning to find high-quality solutions to the ABMP. We create a simulation environment in conjunction with a generic yet representative combat scenario to illustrate how the ADP solution compares in quality to a reasonable, closest-intercept benchmark policy. Our API-LSTD policy improves mean success rate by 2.8% compared to the benchmark policy and offers an 81.7% increase in the frequency with which the policy performs perfectly. Moreover, we find the increased success rate of the ADP policy is, on average, equivalent to the success rate attained by the benchmark policy when using a 20% faster UCAV. These results inform military force management and defense acquisition decisions and aid in the development of more effective tactics, techniques, and procedures. Published by Elsevier B.V.
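The core computational idea in the abstract, approximating the MDP's value function with basis functions and fitting the weights by least-squares temporal differences (the policy-evaluation step inside API-LSTD), can be sketched on a toy example. This is not the paper's implementation: the two-state chain MDP and one-hot features below are illustrative assumptions chosen so the exact values are known in closed form.

```python
import numpy as np

def lstd(transitions, gamma):
    """LSTD policy evaluation: solve A w = b for linear value-function
    weights w, where each transition is (phi_s, reward, phi_next) and
    A = sum phi_s (phi_s - gamma * phi_next)^T, b = sum phi_s * reward."""
    k = len(transitions[0][0])
    A = np.zeros((k, k))
    b = np.zeros(k)
    for phi_s, reward, phi_next in transitions:
        A += np.outer(phi_s, phi_s - gamma * phi_next)
        b += phi_s * reward
    return np.linalg.solve(A, b)

# Toy policy-induced chain (illustrative, not from the paper):
# state 0 -> state 1 with reward 0, state 1 -> state 0 with reward 1,
# using one-hot state features so the fitted weights equal the values.
gamma = 0.9
e0, e1 = np.eye(2)
transitions = [(e0, 0.0, e1), (e1, 1.0, e0)]
weights = lstd(transitions, gamma)

# Exact values satisfy V0 = gamma*V1 and V1 = 1 + gamma*V0, i.e.
# V0 = gamma/(1 - gamma**2), V1 = 1/(1 - gamma**2).
print(weights)  # ~ [4.7368, 5.2632]
```

In full API-LSTD, this evaluation step alternates with policy improvement: simulate the current policy to collect transitions, fit the weights by LSTD, then act greedily with respect to the fitted value approximation.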
Pages: 1435-1449 (15 pages)