Smart Security Audit: Reinforcement Learning with a Deep Neural Network Approximator

Cited by: 14
Authors
Pozdniakov, Konstantin [1 ]
Alonso, Eduardo [1 ]
Stankovic, Vladimir [1 ]
Tam, Kimberly [2 ]
Jones, Kevin [2 ]
Affiliations
[1] City Univ London, London, England
[2] Univ Plymouth, Plymouth, Devon, England
Keywords
Pentesting; audit; Q-learning; reinforcement learning; deep neural network; model checking
DOI
10.1109/CyberSA49311.2020.9139683
CLC Classification Number
TP [automation and computer technology]
Subject Classification Code
0812
Abstract
A significant challenge in modern computer security is the growing skill gap as intruder capabilities increase, making it necessary to begin automating elements of penetration testing so analysts can contend with the growing number of cyber threats. In this paper, we attempt to assist human analysts by automating a single host penetration attack. To do so, a smart agent performs different attack sequences to find vulnerabilities in a target system. As it does so, it accumulates knowledge, learns new attack sequences and improves its own internal penetration testing logic. As a result, this agent (AgentPen for simplicity) is able to successfully penetrate hosts it has never interacted with before. A computer security administrator using this tool would receive a comprehensive, automated sequence of actions leading to a security breach, highlighting potential vulnerabilities, and reducing the amount of menial tasks a typical penetration tester would need to execute. To achieve autonomy, we apply a model-free reinforcement learning algorithm, Q-learning, with an approximator that incorporates a deep neural network architecture. The security audit itself is modelled as a Markov Decision Process in order to test a number of decision-making strategies and compare their convergence to optimality. A series of experimental results is presented to show how this approach can be effectively used to automate penetration testing using a scalable, i.e. non-exhaustive, and adaptive approach.
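The core technique named in the abstract, Q-learning with a trainable function approximator over an MDP, can be sketched as follows. This is a minimal illustration only: the toy "attack-stage" environment, the linear one-hot approximator (standing in for the paper's deep network), and all names are invented for the example and are not taken from the paper.

```python
import random

# Toy single-host MDP: states are stages of an attack sequence;
# action 1 advances the attack, action 0 is a dead end back to the start.
N_STATES, N_ACTIONS, GOAL = 4, 2, 3

def step(s, a):
    if a == 1:
        s2 = s + 1
        return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL
    return 0, 0.0, False  # dead end: reset to start, no reward

def phi(s):
    # One-hot state features for the approximator.
    f = [0.0] * N_STATES
    f[s] = 1.0
    return f

# Linear approximator: Q(s, a) = w[a] . phi(s), trained by the
# Q-learning (TD) update; a deep network would replace this.
w = [[0.0] * N_STATES for _ in range(N_ACTIONS)]

def q(s, a):
    return sum(wi * fi for wi, fi in zip(w[a], phi(s)))

alpha, gamma, eps = 0.1, 0.9, 0.2
random.seed(0)
for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        a = random.randrange(N_ACTIONS) if random.random() < eps \
            else max(range(N_ACTIONS), key=lambda b: q(s, b))
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(q(s2, b) for b in range(N_ACTIONS))
        td = target - q(s, a)
        for i, fi in enumerate(phi(s)):  # gradient step on the approximator
            w[a][i] += alpha * td * fi
        s = s2

# The learned greedy policy should advance the attack at every stage.
policy = [max(range(N_ACTIONS), key=lambda b: q(s, b)) for s in range(GOAL)]
print(policy)
```

Replacing `phi` and the weight table with a neural network (and the per-feature update with backpropagation) yields the deep-approximator variant the paper describes; the TD target and epsilon-greedy exploration are unchanged.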
Pages: 8