Constrained Deep Reinforcement Learning for Fronthaul Compression Optimization

Cited by: 0
Authors
Gronland, Axel [1 ,2 ]
Russo, Alessio [1 ]
Jedra, Yassir [1 ]
Klaiqi, Bleron [2 ]
Gelabert, Xavier [2 ]
Affiliations
[1] Royal Inst Technol KTH, Stockholm, Sweden
[2] Stockholm Res Ctr, Huawei Technol Sweden AB, Stockholm, Sweden
Keywords
C-RAN; fronthaul; machine learning; reinforcement learning; performance evaluation;
DOI
10.1109/ICMLCN59089.2024.10624764
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
In the Centralized Radio Access Network (C-RAN) architecture, functions can be placed at central or distributed locations. This architecture can offer higher capacity and cost savings, but it also places strict requirements on the fronthaul (FH); such requirements can take many forms, and in this work we consider constraints on packet loss and latency. Adaptive FH compression schemes that adjust the amount of compression to varying FH traffic are a promising approach to meeting these stringent FH requirements. In this work, we design such a compression scheme using a model-free, off-policy deep reinforcement learning algorithm that accounts for FH latency and packet-loss constraints. Furthermore, the algorithm is designed for model transparency and interpretability, which is crucial for AI trustworthiness in performance-critical domains. We show that our algorithm successfully selects an appropriate compression scheme while satisfying the constraints, and achieves a roughly 70% increase in FH utilization compared to a reference scheme.
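The abstract describes a model-free, off-policy deep reinforcement learning agent that must respect latency and packet-loss constraints. As a rough illustration of one common way such constraints are handled (not the authors' published algorithm), the sketch below shows a Lagrangian-relaxed, DQN-style temporal-difference update in PyTorch: the constraint costs are folded into the reward through dual variables that are adjusted by dual ascent whenever the observed average cost exceeds its budget. The state features, action set, budgets, and network sizes are illustrative assumptions.

# Minimal sketch (illustrative, not the paper's code): constrained off-policy deep RL
# for compression-level selection via a Lagrangian-relaxed DQN-style update.
import torch
import torch.nn as nn

STATE_DIM = 4          # assumed features, e.g. FH load, queue length, traffic rate, current level
N_ACTIONS = 3          # assumed candidate compression ratios
GAMMA = 0.99
LATENCY_BUDGET = 0.1   # per-step average latency-cost budget (assumed units)
LOSS_BUDGET = 0.01     # per-step average packet-loss budget (assumed)

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# Lagrange multipliers (dual variables) for the latency and packet-loss constraints.
lam = torch.zeros(2)
DUAL_LR = 1e-2

def td_update(batch):
    """One off-policy TD update on the constraint-penalized reward."""
    s, a, r, c_lat, c_loss, s_next, done = batch
    # Penalized reward: utilization-style reward minus weighted constraint costs.
    r_pen = r - lam[0] * c_lat - lam[1] * c_loss
    with torch.no_grad():
        target = r_pen + GAMMA * (1 - done) * q_net(s_next).max(dim=1).values
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Dual ascent: tighten a multiplier when its average cost exceeds the budget.
    lam[0] = torch.clamp(lam[0] + DUAL_LR * (c_lat.mean() - LATENCY_BUDGET), min=0.0)
    lam[1] = torch.clamp(lam[1] + DUAL_LR * (c_loss.mean() - LOSS_BUDGET), min=0.0)
    return loss.item()

# Usage with a random batch of transitions (placeholder for replay-buffer samples):
batch = (torch.randn(32, STATE_DIM), torch.randint(N_ACTIONS, (32,)),
         torch.rand(32), torch.rand(32) * 0.2, torch.rand(32) * 0.02,
         torch.randn(32, STATE_DIM), torch.zeros(32))
print(td_update(batch))

In this primal-dual formulation the multipliers grow only while a constraint is violated on average, which is one standard way to trade off reward maximization against latency and packet-loss budgets in constrained RL.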
Pages: 498-504
Page count: 7
Related Papers
(50 items in total)
  • [21] Zhang, Hongjie; He, Zhuocheng; Li, Jing. Accelerating the Deep Reinforcement Learning with Neural Network Compression. 2019 International Joint Conference on Neural Networks (IJCNN), 2019.
  • [22] Yuan, Xin; Ren, Liangliang; Lu, Jiwen; Zhou, Jie. Enhanced Bayesian Compression via Deep Reinforcement Learning. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019: 6939-6948.
  • [23] Mirhoseini, Azalia. Model Parallelism Optimization with Deep Reinforcement Learning. 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW 2018), 2018: 855.
  • [24] Zhang, Junpeng; Deng, Fang; Yang, Xudong. FPGA Placement Optimization with Deep Reinforcement Learning. 2021 2nd International Conference on Computer Engineering and Intelligent Control (ICCEIC 2021), 2021: 73-76.
  • [25] Zhou, Zhenpeng; Kearnes, Steven; Li, Li; Zare, Richard N.; Riley, Patrick. Optimization of Molecules via Deep Reinforcement Learning. Scientific Reports, 9.
  • [26] Coskun, Mustafa; Baggag, Abdelkader; Chawla, Sanjay. Deep Reinforcement Learning for Traffic Light Optimization. 2018 18th IEEE International Conference on Data Mining Workshops (ICDMW), 2018: 564-571.
  • [27] Chen, Yu; Chen, Jie; Krishnamurthi, Ganesh; Yang, Huijing; Wang, Huahui; Zhao, Wenjie. Deep Reinforcement Learning for RAN Optimization and Control. 2021 IEEE Wireless Communications and Networking Conference (WCNC), 2021.
  • [28] Zhou, Zhenpeng; Kearnes, Steven; Li, Li; Zare, Richard N.; Riley, Patrick. Optimization of Molecules via Deep Reinforcement Learning. Scientific Reports, 2019, 9 (1).
  • [29] Jayant, Ashish Kumar; Bhatnagar, Shalabh. Model-based Safe Deep Reinforcement Learning via a Constrained Proximal Policy Optimization Algorithm. Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.
  • [30] Shao, Shuai; Tian, Ye; Zhang, Yajie. Deep Reinforcement Learning Assisted Surrogate Model Management for Expensive Constrained Multi-objective Optimization. Swarm and Evolutionary Computation, 2025, 92.