Communication-Efficient Federated Learning for Large-Scale Multiagent Systems in ISAC: Data Augmentation With Reinforcement Learning

Cited by: 0
Authors
Ouyang, Wenjiang [1 ]
Liu, Qian [2 ]
Mu, Junsheng [1 ]
Al-Dulaimi, Anwer [3 ]
Jing, Xiaojun [1 ]
Liu, Qilie [2 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Informat & Commun Engn, Beijing 100876, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Sch Commun & Informat Engn, Chongqing 400065, Peoples R China
[3] EXFO, Res & Dev Dept, Montreal, PQ H4S 0A4, Canada
Source
IEEE SYSTEMS JOURNAL
Keywords
Data models; Training; Data augmentation; Integrated sensing and communication; Generative adversarial networks; Federated learning; Data privacy; deep reinforcement learning; federated learning (FL); integrated sensing and communication (ISAC); large-scale multiagent systems (LSMAS); NETWORKING;
DOI
10.1109/JSYST.2024.3450883
Chinese Library Classification (CLC)
TP [Automation technology, Computer technology]
Discipline Code
0812
Abstract
Integrated sensing and communication (ISAC) has attracted great attention because the coexistence of sensing and communication functions improves spectrum efficiency and reduces deployment costs. Meanwhile, federated learning (FL) holds great promise for large-scale multiagent systems (LSMAS) in ISAC thanks to its privacy-preserving mechanism. Non-independent and identically distributed (non-IID) data are a fundamental challenge in FL and severely degrade convergence. To deal with the non-IID issue in FL, a data augmentation optimization algorithm (DAOA) based on reinforcement learning (RL) is proposed, in which an augmented dataset is generated by a generative adversarial network (GAN) and the local model parameters are fed into a deep Q-network (DQN) to learn the optimal number of augmented samples. Unlike existing works that optimize only the training performance, this article also accounts for the number of augmented samples to improve sample efficiency. In addition, to alleviate the high-dimensional input problem of the DQN and reduce the communication overhead of FL, a lightweight client model based on depthwise separable convolution (DSC) is adopted. Simulation results indicate that the proposed DAOA algorithm achieves comparable performance with significantly fewer augmented samples and greatly reduces the communication overhead compared with benchmark algorithms.
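The abstract describes the DAOA pipeline only at a high level: a GAN produces augmented samples and a DQN, fed with the local model parameters, decides how many of them to add. The sketch below is a minimal, hypothetical PyTorch illustration of that RL step; the state construction (per-layer mean/std statistics), the action granularity (multiples of 100 samples), and all names (QNetwork, summarize_parameters, select_num_augmented, STATE_DIM, NUM_ACTIONS) are assumptions for exposition, not the authors' implementation.

# Hypothetical sketch of the DAOA RL step: a DQN maps a compressed summary of
# the local model parameters to a discrete choice of how many GAN-generated
# samples to add. Sizes and the action-to-count mapping are assumed.
import random

import torch
import torch.nn as nn

STATE_DIM = 64     # assumed length of the compressed parameter summary
NUM_ACTIONS = 10   # assumed: action k selects (k + 1) * 100 augmented samples


class QNetwork(nn.Module):
    """Small MLP Q-network: parameter summary -> Q-value per candidate size."""

    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def summarize_parameters(model: nn.Module, state_dim: int = STATE_DIM) -> torch.Tensor:
    """Compress the local model parameters into a fixed-length DQN state.

    The paper tackles the high-dimensional input problem with a lightweight
    DSC-based client model; here we simply use per-layer mean/std statistics,
    padded or truncated to state_dim, purely for illustration.
    """
    stats = []
    for p in model.parameters():
        stats.append(p.detach().mean())
        stats.append(p.detach().std(unbiased=False))
    vec = torch.stack(stats)
    if vec.numel() < state_dim:
        vec = torch.cat([vec, vec.new_zeros(state_dim - vec.numel())])
    return vec[:state_dim]


def select_num_augmented(q_net: QNetwork, state: torch.Tensor, epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice of how many GAN-generated samples to add this round."""
    if random.random() < epsilon:
        action = random.randrange(NUM_ACTIONS)
    else:
        with torch.no_grad():
            action = int(q_net(state.unsqueeze(0)).argmax(dim=1).item())
    return (action + 1) * 100  # map the discrete action to a sample count (assumed)

In a full federated round, each client would presumably call select_num_augmented before local training, draw that many samples from its GAN, and update the DQN from a reward tied to training performance and augmentation cost, in line with the abstract's description.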
Pages: 1893-1904
Number of pages: 12
Related Papers
50 records in total
  • [21] Communication-Efficient Federated Learning for Hybrid VLC/RF Indoor Systems
    Sheikholeslami, Seyed Mohammad
    Rasti-Meymandi, Arash
    Seyed-Mohammadi, Seyed Jamal
    Abouei, Jamshid
    Plataniotis, Konstantinos N.
    IEEE ACCESS, 2022, 10 : 126479 - 126493
  • [22] Communication-Efficient Personalized Federated Learning on Non-IID Data
    Li, Xiangqian
    Ma, Chunmei
    Huang, Baogui
    Li, Guangshun
    2023 19TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING, MSN 2023, 2023, : 562 - 569
  • [23] Communication-Efficient Federated Data Augmentation on Non-IID Data
    Wen, Hui
    Wu, Yue
    Li, Jingjing
    Duan, Hancong
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 3376 - 3385
  • [24] HCFL: A High Compression Approach for Communication-Efficient Federated Learning in Very Large Scale IoT Networks
    Nguyen, Minh-Duong
    Lee, Sang-Min
    Pham, Quoc-Viet
    Hoang, Dinh Thai
    Nguyen, Diep N.
    Hwang, Won-Joo
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2023, 22 (11) : 6495 - 6507
  • [25] Hierarchical Mean-Field Deep Reinforcement Learning for Large-Scale Multiagent Systems
    Yu, Chao
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 10, 2023, : 11744 - 11752
  • [26] Federated Learning with Autotuned Communication-Efficient Secure Aggregation
    Bonawitz, Keith
    Salehi, Fariborz
    Konecny, Jakub
    McMahan, Brendan
    Gruteser, Marco
    CONFERENCE RECORD OF THE 2019 FIFTY-THIRD ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS, 2019, : 1222 - 1226
  • [27] ALS Algorithm for Robust and Communication-Efficient Federated Learning
    Hurley, Neil
    Duriakova, Erika
    Geraci, James
    O'Reilly-Morgan, Diarmuid
    Tragos, Elias
    Smyth, Barry
    Lawlor, Aonghus
    PROCEEDINGS OF THE 2024 4TH WORKSHOP ON MACHINE LEARNING AND SYSTEMS, EUROMLSYS 2024, 2024, : 56 - 64
  • [28] Communication-efficient federated learning via knowledge distillation
    Wu, Chuhan
    Wu, Fangzhao
    Lyu, Lingjuan
    Huang, Yongfeng
    Xie, Xing
    NATURE COMMUNICATIONS, 2022, 13 (01)
  • [29] On the Design of Communication-Efficient Federated Learning for Health Monitoring
    Chu, Dong
    Jaafar, Wael
    Yanikomeroglu, Halim
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 1128 - 1133
  • [30] Communication-Efficient Design for Quantized Decentralized Federated Learning
    Chen, Li
    Liu, Wei
    Chen, Yunfei
    Wang, Weidong
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2024, 72 : 1175 - 1188