A soft actor-critic reinforcement learning algorithm for network intrusion detection

Cited: 2
Authors
Li, Zhengfa [1 ,2 ]
Huang, Chuanhe [1 ,2 ]
Deng, Shuhua [3 ]
Qiu, Wanyu [1 ,2 ]
Gao, Xieping [4 ]
Affiliations
[1] Wuhan Univ, Sch Comp Sci, Wuhan, Peoples R China
[2] Hubei LuoJia Lab, Wuhan, Peoples R China
[3] Xiangtan Univ, Key Lab Intelligent Comp & Informat Proc, Minist Educ, Xiangtan, Peoples R China
[4] Hunan Normal Univ, Coll Informat Sci & Engn, Changsha, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Network security; Anomaly detection; Network intrusion detection; Deep reinforcement learning; Soft actor-critic;
DOI
10.1016/j.cose.2023.103502
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Network intrusion detection plays a vital role in network security. Although current deep learning-based intrusion detection algorithms achieve good detection performance, they remain limited in handling imbalanced datasets and in identifying minority and unknown attacks. In this paper, we propose AE-SAC, an intrusion detection model based on adversarial environment learning and the soft actor-critic reinforcement learning algorithm. First, we introduce an environment agent that resamples the training data to address the imbalance of the original data. Second, we redefine the rewards in reinforcement learning: to improve the recognition rate of minority categories of network attacks, we assign different reward values to different attack categories. The environment agent and the classifier agent are trained adversarially, each maximizing its own reward. Finally, multi-class classification experiments on the NSL-KDD and AWID datasets compare AE-SAC with existing state-of-the-art intrusion detection algorithms. AE-SAC achieves excellent classification performance, with an accuracy of 84.15% and an F1-score of 83.97% on NSL-KDD, and an accuracy and F1-score above 98.9% on AWID.
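The class-weighted reward idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class names follow the NSL-KDD attack families, but the specific reward values and the symmetric penalty rule are assumptions for demonstration only.

```python
# Hypothetical per-class rewards: rarer attack classes earn more when
# classified correctly, pushing the classifier agent's policy to attend
# to minority attacks. Values are illustrative, not from the paper.
CLASS_REWARDS = {
    "normal": 1.0,
    "dos": 1.0,    # majority attack class
    "probe": 2.0,
    "r2l": 4.0,    # minority class: higher reward
    "u2r": 8.0,    # rarest class: highest reward
}

def reward(true_label: str, predicted_label: str) -> float:
    """Return the classifier agent's reward for one labelled sample."""
    if predicted_label == true_label:
        return CLASS_REWARDS[true_label]
    # Misclassification is penalised in proportion to the missed class's
    # weight, so failing to detect a rare attack costs the most.
    return -CLASS_REWARDS[true_label]

print(reward("u2r", "u2r"))     # correct on the rarest class -> 8.0
print(reward("u2r", "normal"))  # missed rare attack -> -8.0
```

Under this scheme the expected return is no longer dominated by the majority classes, which is the stated motivation for redefining the rewards; the environment agent's resampling then works adversarially against this objective.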
Pages: 15