AsyncFedGAN: An Efficient and Staleness-Aware Asynchronous Federated Learning Framework for Generative Adversarial Networks

Cited by: 0
Authors
Manu, Daniel [1 ]
Alazzwi, Abee [1 ]
Yao, Jingjing [2 ]
Lin, Youzuo [3 ]
Sun, Xiang [1 ]
Affiliations
[1] Univ New Mexico, Dept Elect & Comp Engn, SECNet Labs, Albuquerque, NM 87131 USA
[2] Texas Tech Univ, Dept Comp Sci, Lubbock, TX 79409 USA
[3] Univ North Carolina Chapel Hill, Sch Data Sci & Soc, Chapel Hill, NC 27599 USA
Funding
U.S. National Science Foundation;
Keywords
Training; Computational modeling; Generative adversarial networks; Generators; Data models; Convergence; Load modeling; Data privacy; Adaptation models; Accuracy; Generative adversarial networks (GANs); federated learning; asynchronous; molecular discovery; CLIENT SELECTION; ALLOCATION;
DOI
10.1109/TPDS.2024.3521016
CLC number (Chinese Library Classification)
TP301 [Theory and Methods];
Subject classification code
081202;
Abstract
Generative Adversarial Networks (GANs) are deep learning models that learn to generate new samples similar to existing ones. Traditionally, GANs are trained in centralized data centers, which raises data privacy concerns because clients must upload their data. To address this, Federated Learning (FL) can be combined with GANs, enabling collaborative training without sharing local data. This integration is complex, however, because GANs involve two interdependent models, the generator and the discriminator, whereas FL typically trains a single model over distributed datasets. In this article, we propose AsyncFedGAN, a novel asynchronous FL framework for GANs that efficiently trains both models in a distributed manner, tailored to molecule generation. AsyncFedGAN addresses the challenges of training two interactive models, resolves the straggler problem in synchronous FL, mitigates model staleness in asynchronous FL, and lowers client energy consumption. Extensive simulations on molecular discovery show that AsyncFedGAN converges under proper settings, outperforms baseline methods, and balances model performance against client energy usage.
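To make the staleness-aware asynchronous update concrete, the minimal Python sketch below shows one generic way a server could merge a delayed generator or discriminator update, down-weighting it by its staleness. The decay rule, the base mixing rate, and all names (staleness_weight, async_merge, base_mix) are illustrative assumptions for this sketch, not the aggregation rule defined in AsyncFedGAN.

import numpy as np

def staleness_weight(staleness, decay=0.5):
    # Illustrative polynomial decay: the staler the update, the smaller its weight.
    return 1.0 / (1.0 + staleness) ** decay

def async_merge(global_params, client_params, staleness, base_mix=0.6):
    # Mix a (possibly delayed) client update into the global model,
    # scaling its contribution down according to its staleness.
    alpha = base_mix * staleness_weight(staleness)
    return {name: (1 - alpha) * global_params[name] + alpha * client_params[name]
            for name in global_params}

# Toy usage: the server keeps separate generator and discriminator models and
# merges whichever one a client returns, as soon as it arrives (no round barrier).
rng = np.random.default_rng(0)
server = {"generator": {"w": rng.normal(size=4)},
          "discriminator": {"w": rng.normal(size=4)}}
client_gen_update = {"w": rng.normal(size=4)}   # client's freshly trained generator
staleness = 3                                   # global versions elapsed since dispatch
server["generator"] = async_merge(server["generator"], client_gen_update, staleness)
print(server["generator"]["w"])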
Pages: 553 - 569
Page count: 17
Related papers
50 in total
  • [1] FedASMU: Efficient Asynchronous Federated Learning with Dynamic Staleness-Aware Model Update
    Liu, Ji
    Jia, Juncheng
    Che, Tianshi
    Huo, Chao
    Ren, Jiaxiang
    Zhou, Yang
    Dai, Huaiyu
    Dou, Dejing
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38, NO 12, 2024 : 13900 - 13908
  • [2] ASMAFL: Adaptive Staleness-Aware Momentum Asynchronous Federated Learning in Edge Computing
    Qiao, Dewen
    Guo, Songtao
    Zhao, Jun
    Le, Junqing
    Zhou, Pengzhan
    Li, Mingyan
    Chen, Xuetao
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2025, 24 (04) : 3390 - 3406
  • [3] FedSA: A staleness-aware asynchronous Federated Learning algorithm with non-IID data
    Chen, Ming
    Mao, Bingcheng
    Ma, Tianyi
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2021, 120 : 1 - 12
  • [4] Staleness aware semi-asynchronous federated learning
    Yu, Miri
    Choi, Jiheon
    Lee, Jaehyun
    Oh, Sangyoon
    JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 2024, 93
  • [5] Client Selection With Staleness Compensation in Asynchronous Federated Learning
    Zhu, Hongbin
    Kuang, Junqian
    Yang, Miao
    Qian, Hua
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (03) : 4124 - 4129
  • [6] A Novel Federated Learning Scheme for Generative Adversarial Networks
    Zhang, Jiaxin
    Zhao, Liang
    Yu, Keping
    Min, Geyong
    Al-Dubai, Ahmed Y.
    Zomaya, Albert Y.
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (05) : 3633 - 3649
  • [7] Federated transfer learning for auxiliary classifier generative adversarial networks: framework and industrial application
    Guo, Wei
    Wang, Yijin
    Chen, Xin
    Jiang, Pingyu
    JOURNAL OF INTELLIGENT MANUFACTURING, 2024, 35 (04) : 1439 - 1454
  • [8] Staleness-Controlled Asynchronous Federated Learning: Accuracy and Efficiency Tradeoff
    Sun, Sheng
    Zhang, Zengqi
    Pan, Quyang
    Liu, Min
    Wang, Yuwei
    He, Tianliu
    Chen, Yali
    Wu, Zhiyuan
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (12) : 12621 - 12634
  • [9] Information Stealing in Federated Learning Systems Based on Generative Adversarial Networks
    Sun, Yuwei
    Chong, Ng S. T.
    Ochiai, Hideya
    2021 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2021 : 2749 - 2754