MCORANFed: Communication Efficient Federated Learning in Open RAN

Citations: 0
Authors
Singh, Amardip Kumar [1 ]
Nguyen, Kim Khoa [1 ]
Affiliations
[1] Synchromedia Lab, Ecole Technol Superieure, Montreal, PQ, Canada
Keywords
Federated Learning; O-RAN; 5G; Resource Allocation; RAN Intelligent Controller; Network Slicing; RIC;
DOI
Not available
CLC Number
TN [Electronic Technology, Communication Technology]
Subject Classification Code
0809
Abstract
To bring network intelligence closer to end devices, the Open Radio Access Network (O-RAN) specifies a disaggregated and vendor-agnostic framework of hierarchical processing units. Although this framework can be useful for certain 5G smart-service use cases, no standardised method to train Machine Learning (ML) models has been defined for it. Recently, Federated Learning (FL) has emerged as a promising solution for training in disaggregated systems. Unfortunately, the stringent deadlines of O-RAN control loops and fluctuating network bandwidth pose challenges for FL implementation. In this paper, we tackle this problem by proposing an accelerated gradient descent method to expedite FL convergence, and a compression operator to reduce the communication cost. We formulate a joint optimization problem to select the participating local trainers in each global round of FL and allocate resources to these trainers while minimizing the overall learning time and resource costs. We design an FL algorithm (MCORANFed) that adheres to the deadlines of O-RAN control loops. Extensive experimental results show that MCORANFed outperforms state-of-the-art FL methods such as MFL, FedAvg, and FedProx in terms of convergence and objective cost.
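The abstract couples momentum-style accelerated local updates with a compression operator applied to the model updates that each local trainer uploads. The sketch below (plain NumPy, all names hypothetical, not the authors' MCORANFed code) illustrates how such a compressed, accelerated federated-averaging round can be wired together, using top-k sparsification as a stand-in compression operator.

# Illustrative sketch only -- not the authors' MCORANFed implementation. It combines
# two ideas named in the abstract: momentum-accelerated local gradient descent and a
# compression operator (here, top-k sparsification) applied to each uploaded update.
# All function and variable names are hypothetical.
import numpy as np

def top_k_compress(update, k_ratio=0.25):
    """Keep only the k largest-magnitude entries of the update vector."""
    k = max(1, int(k_ratio * update.size))
    idx = np.argpartition(np.abs(update), -k)[-k:]
    compressed = np.zeros_like(update)
    compressed[idx] = update[idx]
    return compressed

def local_train(w_global, X, y, velocity, lr=0.05, beta=0.9, epochs=5):
    """Heavy-ball (momentum) gradient descent on a local least-squares objective."""
    w = w_global.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)      # gradient of (1/2n)||Xw - y||^2
        velocity = beta * velocity + grad      # accelerated (momentum) step
        w -= lr * velocity
    return w, velocity

def federated_round(w_global, clients, velocities, k_ratio=0.25):
    """One global round: clients train locally and upload compressed model deltas."""
    updates = []
    for i, (X, y) in enumerate(clients):
        w_local, velocities[i] = local_train(w_global, X, y, velocities[i])
        updates.append(top_k_compress(w_local - w_global, k_ratio))
    return w_global + np.mean(updates, axis=0)  # server averages the sparse updates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 20
    w_true = rng.normal(size=d)
    clients = []
    for _ in range(4):                          # four local trainers (e.g. edge nodes)
        X = rng.normal(size=(50, d))
        clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))
    w = np.zeros(d)
    velocities = [np.zeros(d) for _ in clients]
    for _ in range(50):                         # global FL rounds
        w = federated_round(w, clients, velocities)
    print("distance to ground truth:", np.linalg.norm(w - w_true))

Because only a k_ratio fraction of each update's entries is transmitted, the per-round uplink cost shrinks accordingly, while the momentum term compensates for the slower progress that sparsified updates would otherwise cause; the paper's actual operator, trainer selection, and resource allocation follow its own joint optimization formulation.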
Pages
15 - 22 (8 pages)
Related Papers (50 total)
  • [1] Communication Efficient Compressed and Accelerated Federated Learning in Open RAN Intelligent Controllers
    Singh, Amardip Kumar
    Nguyen, Kim Khoa
    IEEE-ACM TRANSACTIONS ON NETWORKING, 2024, 32 (04) : 3361 - 3375
  • [2] Federated learning for efficient spectrum allocation in open RAN
    Asad, Muhammad
    Otoum, Safa
    CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2024, 27 (08): 11237 - 11247
  • [3] Federated learning for secure and efficient vehicular communications in open RAN
    Asad, Muhammad
    Shaukat, Saima
    Nakazato, Jin
    Javanmardi, Ehsan
    Tsukada, Manabu
    CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2025, 28 (03):
  • [4] Communication-efficient federated learning
    Chen, Mingzhe
    Shlezinger, Nir
    Poor, H. Vincent
    Eldar, Yonina C.
    Cui, Shuguang
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2021, 118 (17)
  • [5] Communication and Storage Efficient Federated Split Learning
    Mu, Yujia
    Shen, Cong
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 2976 - 2981
  • [6] Communication-Efficient Vertical Federated Learning
    Khan, Afsana
    ten Thij, Marijn
    Wilbik, Anna
    ALGORITHMS, 2022, 15 (08)
  • [7] Model Compression for Communication Efficient Federated Learning
    Shah, Suhail Mohmad
    Lau, Vincent K. N.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (09) : 5937 - 5951
  • [8] Communication-Efficient Adaptive Federated Learning
    Wang, Yujia
    Lin, Lu
    Chen, Jinghui
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [9] An Efficient and Secure Federated Learning Communication Framework
    Noura, Hassan
    Hariss, Khalil
    20TH INTERNATIONAL WIRELESS COMMUNICATIONS & MOBILE COMPUTING CONFERENCE, IWCMC 2024, 2024, : 961 - 968
  • [10] NQFL: Nonuniform Quantization for Communication Efficient Federated Learning
    Chen, Guojun
    Xie, Kaixuan
    Tu, Yuheng
    Song, Tiecheng
    Xu, Yinfei
    Hu, Jing
    Xin, Lun
    IEEE COMMUNICATIONS LETTERS, 2024, 28 (02) : 332 - 336