Applying reinforcement learning to an insurgency Agent-based Simulation

Cited by: 8
Authors
Collins, Andrew [1 ]
Sokolowski, John [1 ]
Banks, Catherine [1 ]
Affiliations
[1] Virginia Modeling Anal & Simulat Ctr, 1030 Univ Blvd, Suffolk, VA 23435 USA
Keywords
Agent-based Modeling and Simulation; reinforcement learning; insurgency
DOI
10.1177/1548512913501728
Chinese Library Classification
T [Industrial Technology]
Discipline Code
08
Abstract
A requirement of an Agent-based Simulation (ABS) is that the agents must be able to adapt to their environment. Many ABSs achieve this adaptation through simple threshold equations because of the complexity of incorporating more sophisticated approaches. A threshold equation changes an agent's behavior when a numeric property of the agent rises above or falls below a certain threshold value. Threshold equations do not guarantee that the agents will learn what is best for them. Reinforcement learning is an artificial intelligence approach that has been extensively applied to multiagent systems, but there is very little in the literature on its application to ABS. Reinforcement learning has previously been applied to discrete-event simulations with promising results; thus, reinforcement learning is a good candidate for use within an Agent-based Modeling and Simulation (ABMS) environment. This paper uses an established insurgency case study to show some of the consequences of applying reinforcement learning to ABMS, for example, determining whether any actual learning has occurred. The case study was developed using the Repast Simphony software package.
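To make the contrast in the abstract concrete, the sketch below compares a fixed threshold rule with a tabular Q-learning update. This is not the authors' Repast Simphony model; it is a minimal, self-contained Java example (Java chosen because Repast Simphony is Java-based), and the state, action, and reward names ("calm"/"contested", "neutral"/"support") are illustrative assumptions only.

```java
import java.util.Random;

// Minimal sketch contrasting the two adaptation mechanisms described in the abstract.
// All names and the toy reward model are assumptions, not the paper's implementation.
public class AdaptationSketch {

    // Threshold rule: behavior flips when a numeric property crosses a fixed cutoff.
    static boolean thresholdRule(double grievance, double threshold) {
        return grievance > threshold; // e.g. true = "support insurgency"
    }

    // Tabular Q-learning: action values are updated from experienced reward,
    // so behavior can converge toward what is actually best rather than a preset cutoff.
    static final int NUM_STATES = 2;   // hypothetical: 0 = calm area, 1 = contested area
    static final int NUM_ACTIONS = 2;  // hypothetical: 0 = stay neutral, 1 = support insurgency
    static final double ALPHA = 0.1, GAMMA = 0.9, EPSILON = 0.1;
    static final double[][] q = new double[NUM_STATES][NUM_ACTIONS];
    static final Random rng = new Random(42);

    static int chooseAction(int state) {
        if (rng.nextDouble() < EPSILON) return rng.nextInt(NUM_ACTIONS); // explore
        return q[state][0] >= q[state][1] ? 0 : 1;                       // exploit
    }

    static void update(int state, int action, double reward, int nextState) {
        double best = Math.max(q[nextState][0], q[nextState][1]);
        q[state][action] += ALPHA * (reward + GAMMA * best - q[state][action]);
    }

    public static void main(String[] args) {
        // Threshold agent: its decision depends only on the cutoff, never on outcomes.
        System.out.println("Threshold agent supports insurgency: " + thresholdRule(0.7, 0.5));

        // Q-learning agent: repeated interaction with a toy environment shifts its policy.
        int state = 0;
        for (int step = 0; step < 10_000; step++) {
            int action = chooseAction(state);
            // Toy reward model (assumption): neutrality pays off in calm areas,
            // support pays off in contested areas.
            double reward = (state == 0) ? (action == 0 ? 1.0 : -1.0)
                                         : (action == 1 ? 1.0 : -1.0);
            int nextState = rng.nextInt(NUM_STATES);
            update(state, action, reward, nextState);
            state = nextState;
        }
        System.out.printf("Learned values, calm state:      neutral=%.2f support=%.2f%n", q[0][0], q[0][1]);
        System.out.printf("Learned values, contested state: neutral=%.2f support=%.2f%n", q[1][0], q[1][1]);
    }
}
```

Under this toy reward model the learned Q-values separate by state, which illustrates the paper's point that a learning agent's policy reflects experienced outcomes, whereas a threshold agent's behavior is fixed by its cutoff regardless of what happens to it.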
Pages: 353-364
Page count: 12
Related Papers
50 records in total
  • [1] MontiSim: Agent-Based Simulation for Reinforcement Learning of Autonomous Driving
    Hofer, Tristan
    Hoppe, Mattis
    Kusmenko, Evgeny
    Rumpe, Bernhard
    [J]. 2023 IEEE 26TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS, ITSC, 2023, : 2634 - 2639
  • [2] Simulation-based optimization of radiotherapy: Agent-based modeling and reinforcement learning
    Jalalimanesh, Ammar
    Haghighi, Hamidreza Shahabi
    Ahmadi, Abbas
    Soltani, Madjid
    [J]. MATHEMATICS AND COMPUTERS IN SIMULATION, 2017, 133 : 235 - 248
  • [3] Applying agent-based simulation in industrial ecology
    Kraines, S
    Wallace, D
    [J]. JOURNAL OF INDUSTRIAL ECOLOGY, 2006, 10 (1-2) : 15 - 18
  • [4] Data Assimilation Technique for Social Agent-Based Simulation by using Reinforcement Learning
    Kang, Dong-oh
    Bae, Jang Won
    Lee, Chunhee
    Jung, Joon-Young
    Paik, Euihyun
    [J]. PROCEEDINGS OF THE 2018 IEEE/ACM 22ND INTERNATIONAL SYMPOSIUM ON DISTRIBUTED SIMULATION AND REAL TIME APPLICATIONS (DS-RT), 2018, : 220 - 221
  • [5] Agent-based simulation to analyze business office activities using reinforcement learning
    Kenjo, Yukinao
    Yamada, Takashi
    Terano, Takao
    [J]. AGENT-BASED APPROACHES IN ECONOMIC AND SOCIAL COMPLEX SYSTEMS V: POST-PROCEEDINGS OF THE AESCS INTERNATIONAL WORKSHOP 2007, 2009, : 55 - 66
  • [6] MADES: A UNIFIED FRAMEWORK FOR INTEGRATING AGENT-BASED SIMULATION WITH MULTI-AGENT REINFORCEMENT LEARNING
    Wang, Xiaohan
    Zhang, Lin
    Laili, Yuanjun
    Xie, Kunyu
    Lu, Han
    Zhao, Chun
[J]. PROCEEDINGS OF THE 2021 ANNUAL MODELING AND SIMULATION CONFERENCE (ANNSIM'21), 2021
  • [7] Agent-based simulation of group learning
    Spoelstra, Maartje
    Sklar, Elizabeth
    [J]. MULTI-AGENT-BASED SIMULATION VIII, 2008, 5003 : 69 - +
  • [8] An Agent-Based Simulation Modeling with Deep Reinforcement Learning for Smart Traffic Signal Control
    Jang, Ingook
    Kim, Donghun
    Lee, Donghun
    Son, Youngsung
    [J]. 2018 INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATION TECHNOLOGY CONVERGENCE (ICTC), 2018, : 1028 - 1030
  • [9] Application of reinforcement learning for agent-based production scheduling
    Wang, YC
    Usher, JM
    [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2005, 18 (01) : 73 - 82
  • [10] AGENT-BASED SIMULATION OF SOCIAL LEARNING IN CRIMINOLOGY
    Bosse, Tibor
    Gerritsen, Charlotte
    Klein, Michel C. A.
    [J]. ICAART 2009: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE, 2009, : 5 - +