AsyncFedGAN: An Efficient and Staleness-Aware Asynchronous Federated Learning Framework for Generative Adversarial Networks

Cited: 0
Authors
Manu, Daniel [1 ]
Alazzwi, Abee [1 ]
Yao, Jingjing [2 ]
Lin, Youzuo [3 ]
Sun, Xiang [1 ]
Affiliations
[1] Univ New Mexico, Dept Elect & Comp Engn, SECNet Labs, Albuquerque, NM 87131 USA
[2] Texas Tech Univ, Dept Comp Sci, Lubbock, TX 79409 USA
[3] Univ North Carolina Chapel Hill, Sch Data Sci & Soc, Chapel Hill, NC 27599 USA
Funding
U.S. National Science Foundation;
Keywords
Training; Computational modeling; Generative adversarial networks; Generators; Data models; Convergence; Load modeling; Data privacy; Adaptation models; Accuracy; Generative adversarial networks (GANs); federated learning; asynchronous; molecular discovery; CLIENT SELECTION; ALLOCATION;
DOI
10.1109/TPDS.2024.3521016
CLC Number
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
Generative Adversarial Networks (GANs) are deep learning models that learn the distribution of existing samples and generate new samples resembling them. Traditionally, GANs are trained in centralized data centers, which raises data privacy concerns because clients must upload their data. To address this, Federated Learning (FL) can be combined with GANs, enabling collaborative training without sharing local data. This integration is nontrivial, however, because GANs involve two interdependent models, the generator and the discriminator, whereas FL typically trains a single model over distributed datasets. In this article, we propose AsyncFedGAN, a novel asynchronous FL framework for GANs that trains both models efficiently in a distributed manner, tailored for molecule generation. AsyncFedGAN addresses the challenges of training two interactive models, resolves the straggler issue in synchronous FL, mitigates model staleness in asynchronous FL, and reduces client energy consumption. Our extensive simulations on molecular discovery show that AsyncFedGAN converges under proper settings, outperforms baseline methods, and balances model performance against client energy usage.
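The staleness-aware asynchronous aggregation the abstract describes can be illustrated with a FedAsync-style weighted merge: each client update is mixed into the global model as soon as it arrives, with a weight that decays in the update's staleness. The sketch below is illustrative only; the decay function `staleness_weight` and its parameters are assumptions, not the exact rule used in AsyncFedGAN.

```python
def staleness_weight(tau, alpha=0.6, a=0.5):
    """Polynomial decay: the staler a client update (larger tau, i.e. more
    global versions elapsed since the client pulled the model), the smaller
    its mixing weight. alpha and a are illustrative hyperparameters."""
    return alpha * (tau + 1) ** (-a)

def async_merge(global_w, client_w, tau):
    """Merge one client's (possibly stale) weights into the global model
    immediately on arrival -- no waiting for stragglers."""
    m = staleness_weight(tau)
    return [(1.0 - m) * g + m * c for g, c in zip(global_w, client_w)]

# In a GAN setting the same rule would be applied twice per arrival:
# once to the generator weights and once to the discriminator weights.
generator = [0.0] * 4
generator = async_merge(generator, [1.0] * 4, tau=0)  # fresh update: strong pull
generator = async_merge(generator, [1.0] * 4, tau=8)  # stale update: weak pull
```

Weighting by staleness is what lets the server accept every update (avoiding the synchronous straggler bottleneck) while bounding the damage an outdated model can do to convergence.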
Pages: 553-569
Page count: 17
Related Papers
50 total
  • [11] Learning Semantic-aware Normalization for Generative Adversarial Networks
    Zheng, Heliang
    Fu, Jianlong
    Zeng, Yanhong
    Luo, Jiebo
    Zha, Zheng-Jun
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [12] FedMDS: An Efficient Model Discrepancy-Aware Semi-Asynchronous Clustered Federated Learning Framework
    Zhang, Yu
    Liu, Duo
    Duan, Moming
    Li, Li
    Chen, Xianzhang
    Ren, Ao
    Tan, Yujuan
    Wang, Chengliang
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2023, 34 (03) : 1007 - 1019
  • [13] Quality Aware Generative Adversarial Networks
    Kancharla, Parimala
    Channappayya, Sumohana S.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [14] A Novel Federated Learning Framework Based on Conditional Generative Adversarial Networks for Privacy Preserving in 6G
    Huang, Jia
    Chen, Zhen
    Liu, Shengzheng
    Long, Haixia
    ELECTRONICS, 2024, 13 (04)
  • [15] Efficient asynchronous federated neuromorphic learning of spiking neural networks
    Wang, Yuan
    Duan, Shukai
    Chen, Feng
    NEUROCOMPUTING, 2023, 557
  • [16] PerFED-GAN: Personalized Federated Learning via Generative Adversarial Networks
    Cao, Xingjian
    Sun, Gang
    Yu, Hongfang
    Guizani, Mohsen
     IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (05): 3749 - 3762
  • [17] Detecting and mitigating poisoning attacks in federated learning using generative adversarial networks
    Zhao, Ying
    Chen, Junjun
    Zhang, Jiale
    Wu, Di
    Blumenstein, Michael
    Yu, Shui
     CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2022, 34 (07)
  • [18] Chain-AAFL: Chained Adversarial-Aware Federated Learning Framework
    Ge, Lina
    He, Xin
    Wang, Guanghui
    Yu, Junyang
    WEB INFORMATION SYSTEMS AND APPLICATIONS (WISA 2021), 2021, 12999 : 237 - 248
  • [19] BAFL: An Efficient Blockchain-Based Asynchronous Federated Learning Framework
    Xu, Chenhao
    Qu, Youyang
    Eklund, Peter W.
    Xiang, Yong
    Gao, Longxiang
     26TH IEEE SYMPOSIUM ON COMPUTERS AND COMMUNICATIONS (IEEE ISCC 2021), 2021
  • [20] FedACA: An Adaptive Communication-Efficient Asynchronous Framework for Federated Learning
    Zhou, Shuang
    Huo, Yuankai
    Bao, Shunxing
    Landman, Bennett
    Gokhale, Aniruddha
     2022 IEEE INTERNATIONAL CONFERENCE ON AUTONOMIC COMPUTING AND SELF-ORGANIZING SYSTEMS (ACSOS 2022), 2022: 71 - 80