Communication-Efficient Federated Learning for Large-Scale Multiagent Systems in ISAC: Data Augmentation With Reinforcement Learning

Cited: 0
Authors
Ouyang, Wenjiang [1 ]
Liu, Qian [2 ]
Mu, Junsheng [1 ]
Al-Dulaimi, Anwer [3 ]
Jing, Xiaojun [1 ]
Liu, Qilie [2 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Informat & Commun Engn, Beijing 100876, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Sch Commun & Informat Engn, Chongqing 400065, Peoples R China
[3] EXFO, Res & Dev Dept, Montreal, PQ H4S 0A4, Canada
Source
Keywords
Data models; Training; Data augmentation; Integrated sensing and communication; Generative adversarial networks; Federated learning; Data privacy; deep reinforcement learning; federated learning (FL); integrated sensing and communication (ISAC); large-scale multiagent systems (LSMAS); networking
DOI
10.1109/JSYST.2024.3450883
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Integrated sensing and communication (ISAC) has attracted great attention owing to its gains in spectrum efficiency and deployment cost through the coexistence of sensing and communication functions. Meanwhile, federated learning (FL) has great potential for large-scale multiagent systems (LSMAS) in ISAC due to its attractive privacy protection mechanism. Non-independent and identically distributed (non-IID) data are a fundamental challenge in FL and seriously affect convergence performance. To deal with the non-IID issue in FL, a data augmentation optimization algorithm (DAOA) is proposed based on reinforcement learning (RL), where an augmented dataset is generated by a generative adversarial network (GAN) and the local model parameters are input into a deep Q-network (DQN) to learn the optimal number of augmented data. Unlike existing works that optimize only the training performance, this article also considers the number of augmented data in order to improve sample efficiency. In addition, to alleviate the high-dimensional input challenge in the DQN and reduce the communication overhead in FL, a lightweight client model based on depthwise separable convolution (DSC) is applied. Simulation results indicate that the proposed DAOA algorithm achieves considerable performance with significantly fewer augmented data and greatly reduced communication overhead compared with benchmark algorithms.
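The abstract's core idea is that a DQN, fed the local model state, picks how many augmented samples a client should generate. A minimal sketch of the action-selection step is below; the candidate action space, the stub Q-values, and the epsilon value are all illustrative assumptions, not taken from the paper.

```python
import random

# Assumed discrete action space: candidate numbers of GAN-augmented
# samples a client may add to its local dataset.
CANDIDATE_SIZES = [0, 100, 200, 400, 800]

def select_augment_size(q_values, epsilon=0.1, rng=random):
    """Epsilon-greedy pick over candidate augmentation sizes.

    q_values: one Q-value per candidate size, as a DQN would output
    for the current local-model state (here just a stub list).
    """
    if rng.random() < epsilon:
        return rng.choice(CANDIDATE_SIZES)  # explore a random size
    # Exploit: take the size with the highest estimated Q-value.
    best = max(range(len(q_values)), key=q_values.__getitem__)
    return CANDIDATE_SIZES[best]

# Stub Q-values standing in for a forward pass of the DQN.
q = [0.2, 0.5, 0.9, 0.4, 0.1]
print(select_augment_size(q, epsilon=0.0))  # greedy pick -> 200
```

In the full method, the reward would balance the FL accuracy gain against the cost of generating and training on the extra samples; the sketch only shows the greedy/exploratory choice itself.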
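The communication-overhead claim rests on DSC replacing standard convolutions in the client model. A quick parameter-count comparison shows why: a depthwise k x k convolution plus a 1 x 1 pointwise convolution needs far fewer weights than one standard k x k convolution. The layer shape below (64 -> 128 channels, 3 x 3 kernel) is an illustrative assumption, not a layer from the paper.

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def dsc_params(c_in, c_out, k):
    """Depthwise separable convolution: one k x k filter per input
    channel, then a 1 x 1 pointwise convolution to c_out channels."""
    return c_in * k * k + c_in * c_out

standard = conv_params(64, 128, 3)   # 64*128*9 = 73728 weights
separable = dsc_params(64, 128, 3)   # 64*9 + 64*128 = 8768 weights
print(standard, separable, round(standard / separable, 1))
# -> 73728 8768 8.4
```

Since FL clients upload their model parameters every round, an ~8x smaller layer translates directly into less uplink traffic, which is the overhead reduction the abstract reports.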
Pages: 1893-1904
Page count: 12
Related Papers
50 records in total
  • [1] Communication-efficient and privacy-preserving large-scale federated learning counteracting heterogeneity
    Zhou, Xingcai
    Yang, Guang
    INFORMATION SCIENCES, 2024, 661
  • [2] Communication-Efficient Consensus Mechanism for Federated Reinforcement Learning
    Xu, Xing
    Li, Rongpeng
    Zhao, Zhifeng
    Zhang, Honggang
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022, : 80 - 85
  • [3] Communication-Efficient Personalized Federated Edge Learning for Decentralized Sensing in ISAC
    Zhu, Yonghui
    Zhang, Ronghui
    Cui, Yuanhao
    Wu, Sheng
    Jiang, Chunxiao
    Jing, Xiaojun
    2023 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS, ICC WORKSHOPS, 2023, : 207 - 212
  • [4] Communication-efficient federated learning
    Chen, Mingzhe
    Shlezinger, Nir
    Poor, H. Vincent
    Eldar, Yonina C.
    Cui, Shuguang
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2021, 118 (17)
  • [5] Communication-Efficient and Federated Multi-Agent Reinforcement Learning
    Krouka, Mounssif
    Elgabli, Anis
    Ben Issaid, Chaouki
    Bennis, Mehdi
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2022, 8 (01) : 311 - 320
  • [6] Communication-Efficient Federated Learning For Massive MIMO Systems
    Mu, Yuchen
    Garg, Navneet
    Ratnarajah, Tharmalingam
    2022 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2022, : 578 - 583
  • [7] Communication-Efficient Vertical Federated Learning
    Khan, Afsana
    ten Thij, Marijn
    Wilbik, Anna
    ALGORITHMS, 2022, 15 (08)
  • [8] FedQMIX: Communication-efficient federated learning via multi-agent reinforcement learning
    Cao, Shaohua
    Zhang, Hanqing
    Wen, Tian
    Zhao, Hongwei
    Zheng, Quancheng
    Zhang, Weishan
    Zheng, Danyang
    HIGH-CONFIDENCE COMPUTING, 2024, 4 (02)
  • [9] Communication-Efficient Adaptive Federated Learning
    Wang, Yujia
    Lin, Lu
    Chen, Jinghui
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [10] Communication-efficient hierarchical federated learning for IoT heterogeneous systems with imbalanced data
    Abdellatif, Alaa Awad
    Mhaisen, Naram
    Mohamed, Amr
    Erbad, Aiman
    Guizani, Mohsen
    Dawy, Zaher
    Nasreddine, Wassim
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2022, 128 : 406 - 419