Representative Kernels-Based CNN for Faster Transmission in Federated Learning

Cited by: 0
Authors
Li, Wei [1,2]
Shen, Zichen [1 ,2 ]
Liu, Xiulong [3 ]
Wang, Mingfeng [4 ]
Ma, Chao [5 ]
Ding, Chuntao [6 ]
Cao, Jiannong [7 ]
Affiliations
[1] Jiangnan Univ, Res Ctr Intelligent Technol Healthcare, Sch Artificial Intelligence & Comp Sci & Engn, Minist Educ, Wuxi 214126, Jiangsu, Peoples R China
[2] Jiangnan Univ, Jiangsu Key Lab Media Design & Software Technol, Wuxi 214126, Jiangsu, Peoples R China
[3] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300072, Peoples R China
[4] Brunel Univ London, Dept Mech & Aerosp Engn, London UB8 3PH, England
[5] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430072, Peoples R China
[6] Beijing Jiaotong Univ, Sch Comp & Informat Technol, Beijing 100044, Peoples R China
[7] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; convolution neural network; representative kernels; kernel generation function; parameter reduction; module selection; BANDWIDTH;
DOI
10.1109/TMC.2024.3423448
Chinese Library Classification (CLC) code
TP [Automation Technology, Computer Technology];
Discipline classification code
0812;
Abstract
Because of the tension between limited bandwidth and the huge number of model parameters to be transmitted, reducing the parameters that clients must upload to the server for fast transmission has been an ongoing challenge in Federated Learning (FL). Existing works that attempt to reduce the number of transmitted parameters have two limitations: 1) the reduction in parameters is not significant; 2) the performance of the global model is limited. In this paper, we propose a novel method called Fed-KGF that significantly reduces the number of model parameters while improving global model performance. Our goal is to reduce the transmitted parameters by reducing the number of convolution kernels. Specifically, we construct an incomplete model with a few representative convolution kernels and propose a Kernel Generation Function (KGF) to generate the remaining convolution kernels, rendering the incomplete model complete. We discard the generated kernels after training the local models and transmit only the representative kernels during training, thereby significantly reducing the transmitted parameters. Furthermore, the averaging step in traditional FL causes client drift, which hurts global model performance. We innovatively select one or a few modules from each client model in a permutation fashion and aggregate only the uploaded modules rather than averaging all modules, which reduces client drift, improves global model performance, and further reduces the transmitted parameters. Experimental results in both non-Independent and Identically Distributed (non-IID) and IID scenarios on image classification and object detection tasks demonstrate that our Fed-KGF outperforms state-of-the-art (SOTA) FL models.
Pages: 13062-13075
Number of pages: 14
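
Below is a minimal, illustrative Python/PyTorch sketch of the representative-kernel idea described in the abstract. It assumes the Kernel Generation Function can be approximated by learned linear combinations of a few representative kernels; the names RepKernelConv2d, num_representative, and mix are illustrative assumptions, not identifiers from the paper, and this is not the authors' implementation of Fed-KGF.

# Sketch (assumption-based): a conv layer that stores only a few representative
# kernels and generates the full kernel bank locally, so that only the
# representative kernels would need to be uploaded in a federated round.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RepKernelConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size,
                 num_representative=4, stride=1, padding=0):
        super().__init__()
        self.stride, self.padding = stride, padding
        # Representative kernels: the only convolution weights a client would transmit.
        self.rep_kernels = nn.Parameter(
            torch.randn(num_representative, in_channels, kernel_size, kernel_size) * 0.01)
        # Assumed form of the kernel generation function: mixing coefficients that map
        # the representative kernels to out_channels full kernels. Generated kernels
        # are recomputed locally and never uploaded.
        self.mix = nn.Parameter(torch.randn(out_channels, num_representative) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_channels))

    def forward(self, x):
        # Generate the full kernel bank on the fly from the representative kernels.
        full_kernels = torch.einsum("or,rijk->oijk", self.mix, self.rep_kernels)
        return F.conv2d(x, full_kernels, self.bias,
                        stride=self.stride, padding=self.padding)

    def transmitted_state(self):
        # Only the representative kernels are exposed for upload in this sketch; the
        # abstract does not specify whether KGF parameters are also transmitted.
        return {"rep_kernels": self.rep_kernels.detach().clone()}


if __name__ == "__main__":
    layer = RepKernelConv2d(3, 64, 3, num_representative=4, padding=1)
    y = layer(torch.randn(2, 3, 32, 32))
    print(y.shape)                                  # torch.Size([2, 64, 32, 32])
    full = 64 * 3 * 3 * 3                           # weights of a standard conv layer
    kept = 4 * 3 * 3 * 3                            # representative kernels only
    print(f"uploaded conv weights: {kept}/{full}")  # illustrates the reduction

Following the abstract's description, a client would upload only transmitted_state() after local training, and the server would aggregate only the modules it selected from each client in a permutation fashion rather than averaging all modules; the details of that selection and aggregation are given in the paper, not in this sketch.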