Reinforcement learning-based autonomous attacker to uncover computer network vulnerabilities

Cited by: 0
Authors
Mohamed Ahmed A. [1 ]
Nguyen T.T. [1 ]
Abdelrazek M. [2 ]
Aryal S. [1 ]
Affiliations
[1] School of Information Technology, Deakin University, 75 Pigdons Rd, Geelong, VIC 3216, Australia
[2] Applied Artificial Intelligence Institute, Deakin University, 221 Burwood Hwy, Melbourne, VIC 3125, Australia
Keywords
Deep neural network; Deep reinforcement learning; Network security; Network vulnerability; Off-policy
DOI
10.1007/s00521-024-09668-0
Abstract
In today’s intricate information technology landscape, the escalating complexity of computer networks is accompanied by a growing array of malicious threats seeking to compromise network components. To address these security challenges, we propose an approach that combines reinforcement learning and deep neural networks. Our method trains autonomous cyber-agents to strategically attack network nodes, aiming to expose vulnerabilities and extract confidential information. We employ several off-policy deep reinforcement learning algorithms, including deep Q-network (DQN), double DQN, and dueling DQN, to train and evaluate these agents within two enterprise simulation networks provided by Microsoft. The simulations, modeled as Markov games between an attacker and a defender, run without human intervention. Results demonstrate that agents trained with double DQN and dueling DQN surpass baseline agents trained using traditional reinforcement learning and DQN methods. This approach not only deepens our understanding of network vulnerabilities but also lays the groundwork for future efforts to fortify computer network defense and security. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024.
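The abstract names three off-policy value-based learners. As a hedged illustration only (not the authors' implementation; PyTorch, the class and function names, and the observation/action sizes below are all assumptions), the sketch below shows the two ideas that set the stronger agents apart: a dueling network head that decomposes Q(s, a) into a state value V(s) and action advantages A(s, a), and the double DQN target, which selects the next action with the online network but evaluates it with the target network to reduce overestimation.

```python
# Minimal sketch of the dueling architecture and the double DQN target
# (illustrative only; shapes and hyperparameters are assumptions).
import torch
import torch.nn as nn


class DuelingQNet(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantages A(s, .)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.trunk(obs)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=1, keepdim=True)


def double_dqn_target(online: nn.Module, target: nn.Module,
                      reward: torch.Tensor, next_obs: torch.Tensor,
                      done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double DQN: the online net picks argmax_a, the target net scores it."""
    with torch.no_grad():
        next_action = online(next_obs).argmax(dim=1, keepdim=True)
        next_q = target(next_obs).gather(1, next_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q


# Smoke test on random data: batch of 4, 64-dim observations, 10 actions.
online, target = DuelingQNet(64, 10), DuelingQNet(64, 10)
target.load_state_dict(online.state_dict())
y = double_dqn_target(online, target,
                      reward=torch.zeros(4),
                      next_obs=torch.randn(4, 64),
                      done=torch.zeros(4))
print(y.shape)  # torch.Size([4])
```

For comparison, plain DQN would take the max over the target network's own Q-values; decoupling action selection from evaluation is the double DQN change, while the dueling split alters only the network head, so the two modifications combine freely.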
Pages: 14341–14360
Page count: 19