CyberRL: Brain-Inspired Reinforcement Learning for Efficient Network Intrusion Detection

Cited: 0
Authors
Issa, Mariam Ali [1 ]
Chen, Hanning [1 ]
Wang, Junyao [1 ]
Imani, Mohsen [1 ]
Affiliations
[1] Univ Calif Irvine, Dept Comp Sci, Irvine, CA 92697 USA
Funding
U.S. National Science Foundation
Keywords
Task analysis; Q-learning; Integrated circuits; Design automation; Computational modeling; Biological neural networks; Computational efficiency; Brain-inspired computing; cybersecurity; hyperdimensional computing (HDC); intrusion detection; reinforcement learning (RL);
DOI
10.1109/TCAD.2024.3418392
CLC number
TP3 [Computing technology, computer technology]
Subject classification
0812
Abstract
Due to the rapidly evolving landscape of cybersecurity, securing cloud networks and devices is proving to be an increasingly prevalent research challenge. Reinforcement learning (RL) is a subfield of machine learning that has demonstrated its ability to detect cyberattacks, as well as the potential to recognize new ones. Many popular RL algorithms at present rely on deep neural networks, which are computationally expensive to train. An alternative to this class of algorithms is hyperdimensional computing (HDC), a robust, computationally efficient learning paradigm well suited to resource-constrained devices. In this article, we present CyberRL, an HDC algorithm for learning cybersecurity strategies for intrusion detection in an abstract Markov game environment. We demonstrate that CyberRL outperforms its deep learning equivalent in computational efficiency, reaching up to 1.9x training-time speedup across multiple devices, including low-powered ones. We also show its improved learning quality and superior defense and attack security strategies, with up to 12.8x improvement. We implement our framework on a Xilinx Alveo U50 FPGA and achieve approximately 700x speedup and energy efficiency improvements over CPU execution.
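The abstract combines Q-learning with hyperdimensional computing: states are encoded as high-dimensional vectors, and Q-values are read out as similarities between a state hypervector and one trainable model hypervector per action. The paper's exact encoder and update rule are not given here, so the following is only a minimal sketch of that general HDC Q-learning pattern; the dimensionality, feature count, action set, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2048          # hypervector dimensionality (assumed)
N_FEATURES = 4    # toy state-feature count, not from the paper
N_ACTIONS = 2     # e.g., a toy {allow, block} defender action set

# One random bipolar base hypervector per state feature.
feature_hvs = rng.choice([-1.0, 1.0], size=(N_FEATURES, D))

def encode_state(x):
    """Encode a real-valued state vector by weighting each feature's
    base hypervector and bundling (element-wise summing) the results."""
    return (x[:, None] * feature_hvs).sum(axis=0)

# One trainable model hypervector per action; Q(s, a) is the normalized
# dot-product similarity between the state encoding and action model a.
model = np.zeros((N_ACTIONS, D))

def q_values(state_hv):
    return model @ state_hv / D

def update(state_hv, action, target, lr=0.1):
    """Bundle the scaled state hypervector into the chosen action's
    model, moving its Q-estimate toward the TD target."""
    td_error = target - q_values(state_hv)[action]
    model[action] += lr * td_error * state_hv

# Toy usage: one Q-learning step on a random transition.
s = encode_state(rng.random(N_FEATURES))
s_next = encode_state(rng.random(N_FEATURES))
reward, gamma = 1.0, 0.9
a = int(np.argmax(q_values(s)))
target = reward + gamma * q_values(s_next).max()
update(s, a, target)
```

Because the "network" is just a sum of hypervectors, training reduces to vector additions and dot products, which is the property that makes this class of model cheap on CPUs and amenable to FPGA acceleration.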
Pages: 241-250 (10 pages)