Deep Reinforcement Learning for Intelligent Penetration Testing Path Design

Cited by: 3
Authors
Yi, Junkai [1 ]
Liu, Xiaoyan [2 ]
Affiliations
[1] Beijing Informat Sci & Technol Univ, Sch Automat, Key Lab Modern Measurement & Control Technol, Minist Educ, Beijing 100096, Peoples R China
[2] Beijing Informat Sci & Technol Univ, Sch Automat, Beijing 100192, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, No. 16
Keywords
deep reinforcement learning; penetration testing; attack graph; improved DQN algorithm; attack path planning;
DOI
10.3390/app13169467
Chinese Library Classification
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Penetration testing is an important method for evaluating the security of a network system. Attack path planning is central to penetration testing because it simulates attacker behavior, identifies vulnerabilities, reduces potential losses, and supports continuous improvement of security strategies. By systematically simulating various attack scenarios, it enables proactive risk assessment and the development of robust security measures. To address inaccurate path prediction and slow, unstable convergence during the training of attack path planning, an algorithm is proposed that combines an attack graph tool (MulVAL, multi-stage vulnerability analysis language) with the double deep Q-network (DDQN). The algorithm first constructs an attack tree and searches paths in the attack graph, then builds a transfer matrix based on depth-first search to obtain all reachable paths in the target system, and finally applies the DDQN algorithm to obtain the optimal attack path for the target system. The resulting MulVAL double deep Q-network (MDDQN) algorithm is tested in penetration testing environments of different scales. The experimental results show that, compared with the traditional deep Q-network (DQN) algorithm, the MDDQN algorithm converges faster and more stably and improves the efficiency of attack path planning.
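The abstract describes two concrete steps that can be illustrated with a toy sketch: enumerating all reachable attack paths from a transfer matrix via depth-first search, and the double-DQN target computation that decouples action selection from action evaluation. The 5-node graph, reward, and Q-values below are made-up illustrative values, not data from the paper; the paper derives its transfer matrix from a MulVAL attack graph, which is not reproduced here.

```python
import numpy as np

# Hypothetical attack graph: node 0 = entry host, node 4 = target asset.
# transfer[i][j] = 1 means the attacker can pivot from state i to state j.
transfer = np.array([
    [0, 1, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
])

def dfs_paths(matrix, start, goal, path=None):
    """Depth-first search: enumerate every simple path from start to goal."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    paths = []
    for nxt in np.flatnonzero(matrix[start]):
        nxt = int(nxt)
        if nxt not in path:  # avoid revisiting states (simple paths only)
            paths += dfs_paths(matrix, nxt, goal, path)
    return paths

paths = dfs_paths(transfer, 0, 4)
print(paths)  # → [[0, 1, 3, 4], [0, 2, 3, 4], [0, 2, 4]]

# Double-DQN target: the online network *selects* the next action,
# the target network *evaluates* it, which reduces Q-value overestimation
# compared with vanilla DQN (which uses one network for both).
gamma = 0.9
q_online = np.array([1.0, 2.5, 0.3])  # online-net Q(s', a) for 3 actions
q_target = np.array([0.8, 1.9, 0.5])  # target-net Q(s', a)
reward = 1.0
a_star = int(np.argmax(q_online))      # selection by the online network
y = reward + gamma * q_target[a_star]  # evaluation by the target network
print(round(y, 2))  # → 2.71  (1.0 + 0.9 * 1.9)
```

In a full implementation the DFS-enumerated paths constrain the agent's action space, and the DDQN target `y` is regressed against the online network's Q-value for each transition sampled from a replay buffer.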
Pages: 15
Related Papers
50 records
  • [1] Reinforcement Learning for Intelligent Penetration Testing
    Ghanem, Mohamed C.
    Chen, Thomas M.
    [J]. PROCEEDINGS OF THE 2018 SECOND WORLD CONFERENCE ON SMART TRENDS IN SYSTEMS, SECURITY AND SUSTAINABILITY (WORLDS4), 2018, : 185 - 192
  • [2] INNES: An intelligent network penetration testing model based on deep reinforcement learning
    Qianyu Li
    Miao Hu
    Hao Hao
    Min Zhang
    Yang Li
    [J]. Applied Intelligence, 2023, 53 : 27110 - 27127
  • [3] INNES: An intelligent network penetration testing model based on deep reinforcement learning
    Li, Qianyu
    Hu, Miao
    Hao, Hao
    Zhang, Min
    Li, Yang
    [J]. APPLIED INTELLIGENCE, 2023, 53 (22) : 27110 - 27127
  • [4] A hierarchical deep reinforcement learning model with expert prior knowledge for intelligent penetration testing
    Li, Qianyu
    Zhang, Min
    Shen, Yi
    Wang, Ruipeng
    Hu, Miao
    Li, Yang
    Hao, Hao
    [J]. COMPUTERS & SECURITY, 2023, 132
  • [5] Automated Penetration Testing Using Deep Reinforcement Learning
    Hu, Zhenguo
    Beuran, Razvan
    Tan, Yasuo
    [J]. 2020 IEEE EUROPEAN SYMPOSIUM ON SECURITY AND PRIVACY WORKSHOPS (EUROS&PW 2020), 2020, : 2 - 10
  • [6] Efficient Penetration Testing Path Planning Based on Reinforcement Learning with Episodic Memory
    Zhou, Ziqiao
    Zhou, Tianyang
    Xu, Jinghao
    Zhu, Junhu
    [J]. CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2024, 140 (03): : 2613 - 2634
  • [7] Reinforcement learning in VANET penetration testing
    Garrad, Phillip
    Unnikrishnan, Saritha
    [J]. RESULTS IN ENGINEERING, 2023, 17
  • [8] UAV path design with connectivity constraint based on deep reinforcement learning
    Yu, Lin
    Wu, Fahui
    Xu, Zhihai
    Xie, Zhigang
    Yang, Dingcheng
    [J]. PHYSICAL COMMUNICATION, 2022, 52
  • [9] Reinforcement Learning for Efficient Network Penetration Testing
    Ghanem, Mohamed C.
    Chen, Thomas M.
    [J]. INFORMATION, 2020, 11 (01)
  • [10] Realizing Midcourse Penetration With Deep Reinforcement Learning
    Jiang, Liang
    Nan, Ying
    Li, Zhi-Han
    [J]. IEEE ACCESS, 2021, 9 : 89812 - 89822