Attention-based Open RAN Slice Management using Deep Reinforcement Learning

Cited: 1
Authors
Lotfi, Fatemeh [1 ]
Afghah, Fatemeh [1 ]
Ashdown, Jonathan [2 ]
Affiliations
[1] Clemson Univ, Holcombe Dept Elect & Comp Engn, Clemson, SC 29634 USA
[2] Air Force Res Lab, Rome, NY 13441 USA
Funding
U.S. National Science Foundation;
DOI
10.1109/GLOBECOM54140.2023.10436850
Chinese Library Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
As emerging networks such as Open Radio Access Networks (O-RAN) and 5G continue to grow, demand for services with diverse requirements is increasing. Network slicing has emerged as a promising solution for meeting these differing service requirements. However, managing network slices while maintaining quality of service (QoS) in dynamic environments is a challenging task. Utilizing machine learning (ML) approaches for optimal control of dynamic networks can enhance network performance by preventing Service Level Agreement (SLA) violations, which is critical for dependable decision-making and for satisfying the needs of emerging networks. Although reinforcement learning (RL)-based control methods are effective for real-time monitoring and control of network QoS, generalization is necessary to improve decision-making reliability. This paper introduces an innovative attention-based deep RL (ADRL) technique that leverages the disaggregated O-RAN modules and cooperation among distributed agents to achieve better performance through effective information extraction and improved generalization. The proposed method introduces a value-attention network between distributed agents to enable reliable and optimal decision-making. Simulation results demonstrate significant improvements in network performance compared with other DRL baseline methods.
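The abstract describes a value-attention network that weighs contributions from distributed agents when forming a value estimate. The paper's module is learned end-to-end; as a minimal illustrative sketch only, the aggregation step can be approximated with fixed scaled dot-product attention over per-agent embeddings. All names below (`value_attention`, the query/key/value shapes) are hypothetical and not taken from the paper.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def value_attention(query, agent_keys, agent_values):
    """Aggregate distributed agents' value estimates.

    query        : (d,)  embedding of the local slice state
    agent_keys   : (n, d) one key vector per distributed agent
    agent_values : (n,)  each agent's scalar value estimate

    Returns the attention-weighted value and the per-agent weights.
    This is a fixed scaled dot-product form, not the learned
    value-attention network described in the paper.
    """
    d = query.shape[-1]
    scores = agent_keys @ query / np.sqrt(d)   # one relevance score per agent
    weights = softmax(scores)                  # weights sum to 1
    return weights @ agent_values, weights

# With identical keys, every agent is weighted equally, so the
# aggregated value reduces to the mean of the agents' estimates.
value, weights = value_attention(
    np.ones(4), np.ones((3, 4)), np.array([1.0, 2.0, 3.0])
)
```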
Pages: 6328-6333
Page count: 6