CyberRL: Brain-Inspired Reinforcement Learning for Efficient Network Intrusion Detection

Cited by: 0
|
Authors
Issa, Mariam Ali [1 ]
Chen, Hanning [1 ]
Wang, Junyao [1 ]
Imani, Mohsen [1 ]
Affiliation
[1] Univ Calif Irvine, Dept Comp Sci, Irvine, CA 92697 USA
Funding
U.S. National Science Foundation;
Keywords
Task analysis; Q-learning; Integrated circuits; Design automation; Computational modeling; Biological neural networks; Computational efficiency; Brain-inspired computing; cybersecurity; hyperdimensional computing (HDC); intrusion detection; reinforcement learning (RL);
DOI
10.1109/TCAD.2024.3418392
CLC Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
Due to the rapidly evolving cybersecurity landscape, securing cloud networks and devices is proving to be an increasingly pressing research challenge. Reinforcement learning (RL) is a subfield of machine learning that has demonstrated the ability to detect cyberattacks, as well as the potential to recognize new ones. Many popular RL algorithms currently rely on deep neural networks, which are computationally expensive to train. An alternative to this class of algorithms is hyperdimensional computing (HDC), a robust, computationally efficient learning paradigm well suited to resource-constrained devices. In this article, we present CyberRL, an HDC algorithm for learning cybersecurity strategies for intrusion detection in an abstract Markov game environment. We demonstrate that CyberRL outperforms its deep learning equivalent in computational efficiency, reaching up to 1.9x speedup in training time across multiple devices, including low-powered ones. We also show its enhanced learning quality and superior defense and attack security strategies, with up to 12.8x improvement. We implement our framework on a Xilinx Alveo U50 FPGA and achieve approximately 700x speedup and energy efficiency improvements compared to CPU execution.
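The abstract contrasts deep-network RL with an HDC alternative. As a rough illustration of that idea (not the paper's actual CyberRL implementation, whose details are not given here), the sketch below replaces a Q-table or Q-network with one trainable model hypervector per action: discrete states are encoded as random bipolar hypervectors, Q(s, a) is read out as a similarity score, and the temporal-difference update bundles the TD-scaled state hypervector into the chosen action's model. The dimensionality, encoding scheme, and update rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 2048          # hypervector dimensionality (assumed)
N_STATES = 16     # toy discrete state space
N_ACTIONS = 4
ALPHA = 0.1       # learning rate
GAMMA = 0.9       # discount factor

# Random bipolar encoding hypervectors, one per discrete state.
state_hvs = rng.choice([-1.0, 1.0], size=(N_STATES, D))

# One trainable model hypervector per action; Q(s, a) is the
# (scaled) dot-product similarity of state s with action a's model.
model = np.zeros((N_ACTIONS, D))

def q_values(s):
    # Similarity readout in place of a Q-table row lookup.
    return model @ state_hvs[s] / D

def update(s, a, r, s_next):
    # Standard TD error, as in tabular Q-learning.
    td = r + GAMMA * q_values(s_next).max() - q_values(s)[a]
    # Bundle the TD-scaled state hypervector into the action model --
    # the HDC analogue of writing the update into a Q-table cell.
    model[a] += ALPHA * td * state_hvs[s]
```

Because the update is a single vector addition and the readout a matrix-vector product, training avoids backpropagation entirely, which is the source of the efficiency advantage the abstract claims for HDC on low-powered hardware.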
Pages: 241 - 250
Page count: 10