Deep Reinforcement Learning to Assist Command and Control

Cited by: 2
Authors
Park, Song Jun [1 ]
Vindiola, Manuel M. [1 ]
Logie, Anne C. [1 ]
Narayanan, Priya [1 ]
Davies, Jared [2 ]
Affiliations
[1] DEVCOM Army Res Lab, Aberdeen Proving Ground, MD 21005 USA
[2] Cole Engn Serv Inc, Orlando, FL USA
Keywords
deep reinforcement learning; command and control; COA simulation engine; LEVEL
DOI
10.1117/12.2618907
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Multi-domain operations drastically increase the scale and speed required to generate, evaluate, and disseminate command and control (C2) directives. In this work, we evaluate the effectiveness of using reinforcement learning (RL) within an Army C2 system to design an artificial intelligence (AI) agent that accelerates the commander and staff's decision-making process. RL's ability to both explore and exploit produces novel strategies that widen a commander's decision space without increasing cognitive burden. By integrating RL into an efficient course-of-action (COA) war-gaming simulator and training on hundreds of thousands of simulated battles using DoD supercomputing resources, we generated an AI that produces acceptable strategic actions during a simulated operation. Moreover, this approach played an unexpected but significant role in strengthening the underlying wargame simulation engine: the agent discovered and exploited weaknesses in the engine's design. This highlights a future role for RL in testing and improving DoD systems during their development.
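A minimal, dependency-free sketch of the train-by-simulation pattern the abstract describes: an RL agent improving its policy over many simulated battles. `CoaWargameEnv` and its states, actions, and rewards are hypothetical stand-ins for the authors' COA simulation engine, and tabular Q-learning stands in for their deep RL agent; nothing here reflects the paper's actual implementation.

```python
# Hedged sketch only: "CoaWargameEnv" is a toy stand-in for the COA war-gaming
# simulator, and tabular Q-learning substitutes for the paper's deep RL agent.
import random
from collections import defaultdict

class CoaWargameEnv:
    """Toy episodic environment standing in for the COA simulation engine."""
    N_STATES, N_ACTIONS, HORIZON = 8, 4, 20

    def reset(self):
        self.t, self.state = 0, 0
        return self.state

    def step(self, action):
        # Placeholder dynamics: the "right" action advances toward the objective.
        self.t += 1
        if action == self.state % self.N_ACTIONS:
            self.state = min(self.state + 1, self.N_STATES - 1)
        reward = 1.0 if self.state == self.N_STATES - 1 else 0.0
        done = self.t >= self.HORIZON or reward > 0.0
        return self.state, reward, done

def train(episodes=100_000, alpha=0.1, gamma=0.95, eps=0.1):
    """Run many simulated battles, learning Q-values with epsilon-greedy exploration."""
    env = CoaWargameEnv()
    q = defaultdict(lambda: [0.0] * CoaWargameEnv.N_ACTIONS)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Explore with probability eps, otherwise exploit the current estimate.
            if random.random() < eps:
                a = random.randrange(env.N_ACTIONS)
            else:
                a = max(range(env.N_ACTIONS), key=lambda i: q[s][i])
            s2, r, done = env.step(a)
            # One-step Q-learning update toward the bootstrapped target.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

if __name__ == "__main__":
    q = train(episodes=20_000)
    policy = {s: max(range(CoaWargameEnv.N_ACTIONS), key=lambda i: q[s][i])
              for s in sorted(q)}
    print("Greedy action per state:", policy)
```

The same loop structure scales to the setting the abstract describes: swap the toy environment for a high-fidelity simulator, the Q-table for a neural network, and run the episodes in parallel on HPC resources.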
Pages: 9
Related Papers
50 records in total
  • [1] Deep Reinforcement Learning based Command Control System for Automating Fault Diagnosis
    Yamauchi, Hiroshi
    Kimura, Tatsuaki
    [J]. 2023 19TH INTERNATIONAL CONFERENCE ON NETWORK AND SERVICE MANAGEMENT, CNSM, 2023
  • [2] Physiological control for left ventricular assist devices based on deep reinforcement learning
    Fernandez-Zapico, Diego
    Peirelinck, Thijs
    Deconinck, Geert
    Donker, Dirk W.
    Fresiello, Libera
    [J]. ARTIFICIAL ORGANS, 2024
  • [3] Autonomous Command and Control for Earth-Observing Satellites using Deep Reinforcement Learning
    Harris, Andrew
    Naik, Kedar
    [J]. 2023 IEEE AEROSPACE CONFERENCE, 2023
  • [4] Discovering Command and Control Channels Using Reinforcement Learning
    Wang, Cheng
    Kakkar, Akshay
    Redino, Chris
    Rahman, Abdul
    Ajinsyam, S.
    Clark, Ryan
    Radke, Daniel
    Cody, Tyler
    Huang, Lanxiao
    Bowen, Edward
    [J]. SOUTHEASTCON 2023, 2023: 685-692
  • [5] Adversarial attacks on reinforcement learning agents for command and control
    Dabholkar, Ahaan
    Hare, James Z.
    Mittrick, Mark
    Richardson, John
    Waytowich, Nicholas
    Narayanan, Priya
    Bagchi, Saurabh
    [J]. JOURNAL OF DEFENSE MODELING AND SIMULATION-APPLICATIONS METHODOLOGY TECHNOLOGY-JDMS, 2024
  • [6] Deep Reinforcement Learning for Formation Control
    Aykin, Can
    Knopp, Martin
    Diepold, Klaus
    [J]. 2018 27TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (IEEE RO-MAN 2018), 2018: 1124-1128
  • [7] Deep Reinforcement Learning for Contagion Control
    Benalcazar, Diego R.
    Enyioha, Chinwendu
    [J]. 5TH IEEE CONFERENCE ON CONTROL TECHNOLOGY AND APPLICATIONS (IEEE CCTA 2021), 2021: 162-167
  • [8] Research of Command Entity Intelligent Decision Model based on Deep Reinforcement Learning
    Yin, Changsheng
    Yang, Ruopeng
    Zou, Xiaofei
    [J]. PROCEEDINGS OF 2018 5TH IEEE INTERNATIONAL CONFERENCE ON CLOUD COMPUTING AND INTELLIGENCE SYSTEMS (CCIS), 2018: 552-556
  • [9] Ayudante: A Deep Reinforcement Learning Approach to Assist Persistent Memory Programming
    Huang, Hanxian
    Wang, Zixuan
    Kim, Juno
    Swanson, Steven
    [J]. PROCEEDINGS OF THE 2021 USENIX ANNUAL TECHNICAL CONFERENCE, 2021: 789-804
  • [10] Control of chaotic systems by deep reinforcement learning
    Bucci, M. A.
    Semeraro, O.
    Allauzen, A.
    Wisniewski, G.
    Cordier, L.
    Mathelin, L.
    [J]. PROCEEDINGS OF THE ROYAL SOCIETY A-MATHEMATICAL PHYSICAL AND ENGINEERING SCIENCES, 2019, 475 (2231)