Deter and protect: crime modeling with multi-agent learning

Cited by: 0
Authors
Trevor R. Caskey
James S. Wasek
Anna Y. Franz
Affiliation
[1] The George Washington University
Keywords
Crime modeling; Agent-based modeling; Belief learning; Game theory
DOI
Not available
Abstract
This paper presents a formal game-theoretic belief learning approach to model criminology’s routine activity theory (RAT). RAT states that for a crime to occur, a motivated offender (criminal) and a desirable target (victim) must meet in space and time without the presence of capable guardianship (law enforcement). The novelty of using belief learning to model the dynamics of RAT’s offender, target, and guardian behaviors within an agent-based model is that the agents learn and adapt by observing other agents’ actions, without knowledge of the payoffs that drove those agents’ choices. This contrasts with other crime modeling research that has used reinforcement learning, where accumulated rewards from prior experiences guide agent learning. The distinction matters given the dynamics of RAT: it is the presence of the various agent types that provides the opportunity for crime to occur, not the potential for reward. Additionally, the belief learning approach presented fits the observed empirical data of case studies, producing statistically significant results with lower variance than a reinforcement learning approach. Application of this new approach supports law enforcement in developing responses to crime problems and planning for the effects of displacement due to directed responses, thus deterring offenders and protecting the public through crime modeling with multi-agent learning.
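
To make the contrast with reinforcement learning concrete, the following is a minimal, illustrative sketch of a fictitious-play style belief learner in Python. It is not the authors' implementation; the agent names, action labels, and payoff values are hypothetical. The agent updates its belief purely from the other agent's observed actions, never from that agent's payoffs, and then best-responds to that belief.

    import numpy as np

    # Illustrative fictitious-play style belief learner (hypothetical example,
    # not the paper's implementation). The agent counts the other agent's
    # observed actions, forms an empirical belief, and best-responds to it.
    # The other agent's payoffs are never used, only its observed actions.
    class BeliefLearner:
        def __init__(self, payoff_matrix, n_opponent_actions):
            # payoff_matrix[i, j] is this agent's payoff for taking action i
            # when the observed agent takes action j.
            self.payoff = np.asarray(payoff_matrix, dtype=float)
            # Uniform pseudo-counts so early beliefs are not degenerate.
            self.counts = np.ones(n_opponent_actions)

        def observe(self, other_action):
            # Belief update uses only the observed action, no payoff signal.
            self.counts[other_action] += 1

        def belief(self):
            return self.counts / self.counts.sum()

        def act(self):
            # Best response to the current empirical belief.
            return int(np.argmax(self.payoff @ self.belief()))

    # Toy usage: a guardian adapts to an offender whose observed actions are
    # 0 = avoid the area, 1 = approach the target area (values are made up).
    guardian = BeliefLearner(payoff_matrix=[[1.0, -1.0],   # patrol elsewhere
                                            [-0.5, 2.0]],  # patrol target area
                             n_opponent_actions=2)
    for offender_action in [1, 1, 0, 1]:
        guardian.observe(offender_action)
    print(guardian.belief(), guardian.act())

A reinforcement learner would instead update action values from its own accumulated rewards, which is the distinction the abstract draws.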
Pages: 155-169
Page count: 14
Related papers
50 records
  • [41] Learning to Share in Multi-Agent Reinforcement Learning
    Yi, Yuxuan
    Li, Ge
    Wang, Yaowei
    Lu, Zongqing
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [42] A novel multi-agent Q-learning algorithm in cooperative multi-agent system
    Ou, HT
    Zhang, WD
    Zhang, WY
    Xu, XM
    PROCEEDINGS OF THE 3RD WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION, VOLS 1-5, 2000, : 272 - 276
  • [43] Learning to Schedule in Multi-Agent Pathfinding
    Ahn, Kyuree
    Park, Heemang
    Park, Jinkyoo
    2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2023, : 7326 - 7332
  • [44] Auctions, evolution, and multi-agent learning
    Phelps, Steve
    Cai, Kai
    McBurney, Peter
    Niu, Jinzhong
    Parsons, Simon
    Sklar, Elizabeth
    ADAPTIVE AGENTS AND MULTI-AGENT SYSTEMS, 2008, 4865 : 188 - +
  • [45] Learning in BDI multi-agent systems
    Guerra-Hernández, A
    El Fallah-Seghrouchini, A
    Soldano, H
    COMPUTATIONAL LOGIC IN MULTI-AGENT SYSTEMS, 2004, 3259 : 218 - 233
  • [46] Multi-Agent Automated Machine Learning
    Wang, Zhaozhi
    Su, Kefan
    Zhang, Jian
    Jia, Huizhu
    Ye, Qixiang
    Xie, Xiaodong
    Lu, Zongqing
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 11960 - 11969
  • [47] Hierarchical multi-agent reinforcement learning
    Mohammad Ghavamzadeh
    Sridhar Mahadevan
    Rajbala Makar
    Autonomous Agents and Multi-Agent Systems, 2006, 13 : 197 - 229
  • [48] Coordinated Multi-Agent Imitation Learning
    Le, Hoang M.
    Yue, Yisong
    Carr, Peter
    Lucey, Patrick
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70, 2017, 70
  • [49] Learning communication for multi-agent systems
    Giles, CL
    Jim, KC
    INNOVATIVE CONCEPTS FOR AGENT-BASED SYSTEMS, 2002, 2564 : 377 - 390
  • [50] Multi-Agent Learning from Learners
    Caliskan, Mine Melodi
    Chini, Francesco
    Maghsudi, Setareh
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202