Dynamic Scheduling of Cybersecurity Analysts for Minimizing Risk Using Reinforcement Learning

Cited by: 38
Authors
Ganesan, Rajesh [1 ]
Jajodia, Sushil [2 ]
Shah, Ankit [2 ]
Cam, Hasan [3 ]
Affiliations
[1] George Mason Univ, Dept Syst Engn & Operat Res, Mail Stop 4A6, Fairfax, VA 22030 USA
[2] George Mason Univ, Ctr Secure Informat Syst, Mail Stop 5B5, Fairfax, VA 22030 USA
[3] Army Res Lab, 2800 Powder Mill Rd, Adelphi, MD 20783 USA
Funding
U.S. National Science Foundation;
Keywords
Cybersecurity; cybersecurity analysts; dynamic scheduling; genetic algorithm; integer programming; optimization; reinforcement learning; resource allocation; risk mitigation
DOI
10.1145/2882969
CLC classification number
TP18 [Artificial intelligence theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
An important component of the cyber-defense mechanism is the adequate staffing of its cybersecurity analyst workforce and the optimal assignment of analysts to sensors for investigating the dynamic alert traffic. The ever-increasing cybersecurity threats faced by today's digital systems require a strong cyber-defense mechanism that is both reactive in its response to mitigate known risks and proactive in being prepared to handle unknown risks. To be proactive in handling unknown risks, the workforce must be scheduled dynamically so that the system adapts to the day-to-day stochastic demands on its workforce (in both size and expertise mix). These stochastic demands stem from the varying alert generation rate and the varying fraction of alerts that are significant, which creates uncertainty for the scheduler attempting to schedule analysts for work and allocate sensors to analysts. Sensor data are analyzed by automatic processing systems, which generate alerts. A portion of these alerts is categorized as significant and requires thorough examination by a cybersecurity analyst. Risk, in this article, is defined as the percentage of significant alerts that are not thoroughly analyzed by analysts. To minimize risk, the cyber-defense system must accurately estimate the future significant-alert generation rate and dynamically schedule its workforce to meet the stochastic workload demand. The article presents a reinforcement learning-based stochastic dynamic programming optimization model that incorporates these estimates of future alert rates and responds by dynamically scheduling cybersecurity analysts to minimize risk (i.e., maximize the coverage of significant alerts by analysts) and keep risk under a predetermined upper bound. The article tests the dynamic optimization model and compares the results to an integer programming model that optimizes static staffing needs based on a daily-average alert generation rate with no estimation of future alert rates (the static workforce model). Results indicate that over a finite planning horizon, the learning-based optimization model, through a dynamic (on-call) workforce in addition to the static workforce, (a) balances risk across days and reduces overall risk better than the static model, (b) is scalable and capable of identifying the quantity and the right mix of analyst expertise in an organization, and (c) determines the analysts' dynamic (on-call) schedule and their sensor-to-analyst allocation so as to maintain risk below a given upper bound. Several meta-principles derived from the optimization model are presented, which serve as guiding principles for hiring and scheduling cybersecurity analysts. Days-off scheduling was performed to determine weekly analyst work schedules that met the cybersecurity system's workforce constraints and requirements.
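The abstract describes the core mechanism at a high level: estimate the incoming significant-alert rate, then decide each day how many on-call analysts to activate so that risk (the fraction of significant alerts left unanalyzed) stays below an upper bound. The sketch below illustrates that idea as a simple tabular Q-learning loop; it is not the authors' model, and the alert-rate simulator, state discretization, reward shaping, and all parameter values (ALERTS_PER_ANALYST, RISK_BOUND, and so on) are assumptions made purely for illustration.

```python
# Illustrative sketch only (not the paper's implementation): a tabular
# Q-learning scheduler that chooses how many on-call analysts to add each
# day so that risk -- the fraction of significant alerts not thoroughly
# analyzed -- stays below an upper bound. All constants are assumed values.
import random

ALERTS_PER_ANALYST = 25   # significant alerts one analyst can examine per day (assumed)
STATIC_ANALYSTS = 4       # fixed daily workforce (assumed)
MAX_ON_CALL = 6           # largest on-call pool the scheduler may activate (assumed)
RISK_BOUND = 0.05         # required upper bound on daily risk (assumed)

def simulate_significant_alerts():
    """Stand-in for the paper's alert-rate estimate: a noisy daily count."""
    return max(0, int(random.gauss(150, 40)))

def risk(alerts, analysts):
    """Risk = fraction of significant alerts not thoroughly analyzed."""
    covered = min(alerts, analysts * ALERTS_PER_ANALYST)
    return 0.0 if alerts == 0 else (alerts - covered) / alerts

def discretize(alerts, bucket=25):
    """Coarse state: bucketed daily alert volume."""
    return alerts // bucket

Q = {}                                # Q[(state, on_call)] -> value estimate
alpha, gamma, epsilon = 0.1, 0.9, 0.1
actions = range(MAX_ON_CALL + 1)      # number of on-call analysts to activate

for _ in range(20000):
    alerts = simulate_significant_alerts()
    state = discretize(alerts)
    # epsilon-greedy choice of on-call staffing for the day
    if random.random() < epsilon:
        action = random.choice(list(actions))
    else:
        action = max(actions, key=lambda a: Q.get((state, a), 0.0))
    r = risk(alerts, STATIC_ANALYSTS + action)
    # reward: penalize residual risk, penalize bound violations heavily,
    # and charge a small cost per on-call analyst to discourage over-staffing
    reward = -r - (10.0 if r > RISK_BOUND else 0.0) - 0.01 * action
    next_state = discretize(simulate_significant_alerts())
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# Greedy policy read-out for a hypothetical 160-alert day
s = discretize(160)
print("on-call analysts recommended:", max(actions, key=lambda a: Q.get((s, a), 0.0)))
```

In the paper the decision also covers the analysts' expertise mix and the sensor-to-analyst allocation; this single-action sketch omits both and only conveys the risk-bounded on-call staffing idea.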
Pages: 1-21
Page count: 21