Representative Kernels-Based CNN for Faster Transmission in Federated Learning

Cited: 0
Authors
Li, Wei [1 ,2 ]
Shen, Zichen [1 ,2 ]
Liu, Xiulong [3 ]
Wang, Mingfeng [4 ]
Ma, Chao [5 ]
Ding, Chuntao [6 ]
Cao, Jiannong [7 ]
Affiliations
[1] Jiangnan Univ, Res Ctr Intelligent Technol Healthcare, Sch Artificial Intelligence & Comp Sci & Engn, Minist Educ, Wuxi 214126, Jiangsu, Peoples R China
[2] Jiangnan Univ, Jiangsu Key Lab Media Design & Software Technol, Wuxi 214126, Jiangsu, Peoples R China
[3] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300072, Peoples R China
[4] Brunel Univ London, Dept Mech & Aerosp Engn, London UB8 3PH, England
[5] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430072, Peoples R China
[6] Beijing Jiaotong Univ, Sch Comp & Informat Technol, Beijing 100044, Peoples R China
[7] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; convolution neural network; representative kernels; kernel generation function; parameter reduction; module selection; BANDWIDTH;
DOI
10.1109/TMC.2024.3423448
CLC number
TP [Automation technology; computer technology];
Discipline code
0812;
Abstract
Due to the contradiction between limited bandwidth and the huge number of parameters to be transmitted, reducing the model parameters that clients must send to the server has been an ongoing challenge in Federated Learning (FL). Existing works that attempt to reduce the number of transmitted parameters have two limitations: 1) the reduction in parameters is not significant; 2) the performance of the global model is limited. In this paper, we propose a novel method called Fed-KGF that significantly reduces the number of model parameters while improving global model performance. Our goal is to reduce the transmitted parameters by reducing the number of convolution kernels. Specifically, we construct an incomplete model with a few representative convolution kernels and propose a Kernel Generation Function (KGF) that generates the remaining convolution kernels, rendering the incomplete model complete. We discard the generated kernels after training the local models and transmit only the representative kernels during training, thereby significantly reducing the transmitted parameters. Furthermore, traditional FL suffers from client drift caused by the averaging step, which hurts global model performance. We innovatively select one or a few modules from all client models in a permutation manner and aggregate only the uploaded modules, rather than averaging all modules, to reduce client drift, thus improving global model performance and further reducing the transmitted parameters. Experimental results in both non-Independent and Identically Distributed (non-IID) and IID scenarios, on image classification and object detection tasks, demonstrate that our Fed-KGF outperforms state-of-the-art (SOTA) FL models.
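The representative-kernel idea in the abstract can be illustrated with a minimal sketch: a convolution layer that stores only a few representative kernels and derives the rest through a small kernel-generation function. This is an illustrative assumption, not the paper's actual implementation; the class name `KGFConv2d`, the linear-map form of the KGF, and all shapes are hypothetical.

```python
import torch
import torch.nn as nn


class KGFConv2d(nn.Module):
    """Hypothetical sketch: only `n_rep` representative kernels are
    stored (and would be transmitted in FL rounds); the remaining
    out_ch - n_rep kernels are generated locally by a small linear
    kernel-generation function and can be discarded after training."""

    def __init__(self, in_ch, out_ch, k=3, n_rep=4):
        super().__init__()
        # Representative kernels: the only convolution weights kept.
        self.rep = nn.Parameter(torch.randn(n_rep, in_ch, k, k) * 0.1)
        # KGF: maps the n_rep representative kernels to the rest.
        self.kgf = nn.Linear(n_rep, out_ch - n_rep, bias=False)
        self.k, self.in_ch, self.out_ch = k, in_ch, out_ch

    def forward(self, x):
        flat = self.rep.reshape(self.rep.size(0), -1)   # (n_rep, in*k*k)
        gen = self.kgf(flat.t()).t()                    # (out_ch - n_rep, in*k*k)
        gen = gen.reshape(-1, self.in_ch, self.k, self.k)
        weight = torch.cat([self.rep, gen], dim=0)      # full kernel bank
        return nn.functional.conv2d(x, weight, padding=self.k // 2)
```

Under this sketch, a layer with 8 output channels and 4 representative kernels stores only half the convolution weights; the generated half is recreated on each client from the representative kernels.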
Pages: 13062-13075
Page count: 14
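The abstract's permutation-based module aggregation, where the server takes each module from one client rather than averaging all clients, can be sketched as follows. The function name, the round-seeded permutation, and the dict-of-modules representation are illustrative assumptions, not the paper's actual algorithm.

```python
import random


def permuted_module_aggregation(global_modules, client_updates, rnd):
    """Hypothetical sketch: each round, a permutation of clients assigns
    one client to each module slot, and the server adopts that client's
    module directly instead of averaging across all clients."""
    names = sorted(global_modules)           # module names, e.g. "conv1"
    clients = sorted(client_updates)         # participating client ids
    rng = random.Random(rnd)                 # round-seeded permutation
    perm = rng.sample(clients, k=len(clients))
    new_global = dict(global_modules)
    for i, name in enumerate(names):
        owner = perm[i % len(perm)]          # one client owns this module
        new_global[name] = client_updates[owner][name]
    return new_global
```

Because each module comes from a single client, only the selected modules need to be uploaded, which is one way to read the abstract's claim of "further reducing the transmitted parameters".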
Related papers
50 records in total
  • [21] Faster-R-CNN based deep learning for locating corn tassels in UAV imagery
    Al-Zadjali, Aziza
    Shi, Yeyin
    Scott, Stephen
    Deogun, Jitender S.
    Schnable, James
    AUTONOMOUS AIR AND GROUND SENSING SYSTEMS FOR AGRICULTURAL OPTIMIZATION AND PHENOTYPING V, 2020, 11414
  • [22] Classification of Shellfish Recognition Based on Improved Faster R-CNN Framework of Deep Learning
    Feng, Yiran
    Tao, Xueheng
    Lee, Eung-Joo
    MATHEMATICAL PROBLEMS IN ENGINEERING, 2021, 2021
  • [23] TCT: Convexifying Federated Learning using Bootstrapped Neural Tangent Kernels
    Yu, Yaodong
    Wei, Alexander
    Karimireddy, Sai Praneeth
    Ma, Yi
    Jordan, Michael I.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [24] Unequal error protection transmission for federated learning
    Zheng, Sihui
    Chen, Xiang
    IET COMMUNICATIONS, 2022, 16 (10) : 1106 - 1118
  • [25] Layer-Wise Adaptive Weighting for Faster Convergence in Federated Learning
    Lanjewar, Vedant S.
    Tran, Hai-Anh
    Tran, Truong X.
    2024 IEEE INTERNATIONAL CONFERENCE ON INFORMATION REUSE AND INTEGRATION FOR DATA SCIENCE, IRI 2024, 2024, : 126 - 131
  • [26] A Novel Transmission Line Defect Detection Method Based on Adaptive Federated Learning
    Deng, Fangming
    Zeng, Ziqi
    Mao, Wei
    Wei, Baoquan
    Li, Zewen
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [27] Optimization Algorithm for Data Transmission in the Vehicular Networking Based on Federated Edge Learning
    Chen, Xuanjin
    Ni, Zhengwei
    2024 5TH INTERNATIONAL CONFERENCE ON MECHATRONICS TECHNOLOGY AND INTELLIGENT MANUFACTURING, ICMTIM 2024, 2024, : 778 - 786
  • [28] Faster Rates for Compressed Federated Learning with Client-Variance Reduction
    Zhao, Haoyu
    Burlachenko, Konstantin
    Li, Zhize
    Richtarik, Peter
    SIAM JOURNAL ON MATHEMATICS OF DATA SCIENCE, 2024, 6 (01): : 154 - 175
  • [29] CAPTCHA Recognition Based on Faster R-CNN
    Du, Feng-Lin
    Li, Jia-Xing
    Yang, Zhi
    Chen, Peng
    Wang, Bing
    Zhang, Jun
    INTELLIGENT COMPUTING THEORIES AND APPLICATION, ICIC 2017, PT II, 2017, 10362 : 597 - 605
  • [30] Butterfly Recognition Based on Faster R-CNN
    Zhao, Ruoyan
    Li, Cuixia
    Ye, Shuai
    Fang, Xinru
    2018 INTERNATIONAL SEMINAR ON COMPUTER SCIENCE AND ENGINEERING TECHNOLOGY (SCSET 2018), 2019, 1176