Representative Kernels-Based CNN for Faster Transmission in Federated Learning

Cited: 0
Authors
Li, Wei [1 ,2 ]
Shen, Zichen [1 ,2 ]
Liu, Xiulong [3 ]
Wang, Mingfeng [4 ]
Ma, Chao [5 ]
Ding, Chuntao [6 ]
Cao, Jiannong [7 ]
Affiliations
[1] Jiangnan Univ, Res Ctr Intelligent Technol Healthcare, Sch Artificial Intelligence & Comp Sci & Engn, Minist Educ, Wuxi 214126, Jiangsu, Peoples R China
[2] Jiangnan Univ, Jiangsu Key Lab Media Design & Software Technol, Wuxi 214126, Jiangsu, Peoples R China
[3] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300072, Peoples R China
[4] Brunel Univ London, Dept Mech & Aerosp Engn, London UB8 3PH, England
[5] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430072, Peoples R China
[6] Beijing Jiaotong Univ, Sch Comp & Informat Technol, Beijing 100044, Peoples R China
[7] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; convolution neural network; representative kernels; kernel generation function; parameter reduction; module selection; bandwidth
DOI
10.1109/TMC.2024.3423448
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Due to the tension between limited bandwidth and the large number of parameters to be transmitted, reducing the model parameters that clients must send to the server for fast transmission has been an ongoing challenge in Federated Learning (FL). Existing works that attempt to reduce the amount of transmitted parameters have limitations: 1) the reduction in the number of parameters is not significant; 2) the performance of the global model is limited. In this paper, we propose a novel method called Fed-KGF that significantly reduces the number of model parameters while improving global model performance. Our goal is to reduce the transmitted parameters by reducing the number of convolution kernels. Specifically, we construct an incomplete model with a few representative convolution kernels and propose a Kernel Generation Function (KGF) that generates the remaining convolution kernels, rendering the incomplete model complete. We discard the generated kernels after training local models and transmit only the representative kernels during training, thereby significantly reducing the transmitted parameters. Furthermore, traditional FL suffers from client drift caused by the averaging method, which hurts global model performance. We innovatively select one or a few modules from all client models in a permutation manner and aggregate only the uploaded modules rather than averaging all modules, which reduces client drift, improves global model performance, and further reduces the transmitted parameters. Experimental results in both non-Independent and Identically Distributed (non-IID) and IID scenarios for image classification and object detection tasks demonstrate that our Fed-KGF outperforms state-of-the-art (SOTA) FL models.
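The abstract outlines two mechanisms: a client-side layer that keeps only a few representative kernels and generates the rest with a KGF, and server-side aggregation that takes selected modules from clients in a permutation rather than averaging everything. The sketch below is a minimal, hypothetical PyTorch illustration of both ideas under stated assumptions; the names (RepresentativeConv2d, aggregate_selected_modules), the linear-mixing form of the KGF, and the round-robin module schedule are assumptions for illustration only, not the paper's actual implementation.

```python
# Hypothetical sketch of the two ideas in the abstract (not the paper's code):
# 1) a conv layer that stores only a few representative kernels and generates
#    the remaining kernels with a small kernel-generation function (KGF), so a
#    client only transmits the representative kernels (plus the tiny KGF);
# 2) server-side aggregation that, each round, takes each module from one
#    client in a permuted (round-robin) order instead of averaging all modules.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RepresentativeConv2d(nn.Module):
    """Conv layer with `num_rep` learned (transmitted) kernels; the remaining
    `out_ch - num_rep` kernels are produced by an assumed linear KGF."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3, num_rep: int = 4):
        super().__init__()
        self.rep_kernels = nn.Parameter(torch.randn(num_rep, in_ch, k, k) * 0.02)
        # Assumed KGF: linearly mixes representative kernels into the others.
        self.kgf = nn.Linear(num_rep, out_ch - num_rep, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        num_rep, in_ch, k, _ = self.rep_kernels.shape
        flat = self.rep_kernels.reshape(num_rep, -1)                 # (R, C*k*k)
        generated = self.kgf(flat.t()).t().reshape(-1, in_ch, k, k)  # (O-R, C, k, k)
        weight = torch.cat([self.rep_kernels, generated], dim=0)     # full kernel set
        return F.conv2d(x, weight, padding=k // 2)

    def transmitted_state(self) -> dict:
        # What a client would upload: representative kernels and KGF weights only;
        # the generated kernels are discarded after local training.
        return {"rep_kernels": self.rep_kernels.detach().clone(),
                "kgf.weight": self.kgf.weight.detach().clone()}


def aggregate_selected_modules(global_state: dict, client_states: list[dict],
                               module_names: list[str], rnd: int) -> dict:
    """Toy permutation-style aggregation: in round `rnd`, module i is taken from
    client (i + rnd) mod K instead of averaging that module over all clients."""
    new_state = dict(global_state)
    num_clients = len(client_states)
    for i, name in enumerate(module_names):
        donor = client_states[(i + rnd) % num_clients]
        for key, value in donor.items():
            if key.startswith(name):
                new_state[key] = value
    return new_state
```

Under this toy parameterization, with in_ch = out_ch = 64, k = 3, and num_rep = 4, a layer would transmit roughly 4·64·3·3 + 60·4 ≈ 2.5K parameters instead of the 64·64·3·3 ≈ 36.9K of a standard convolution, which illustrates the kind of reduction the abstract targets; the exact savings in Fed-KGF depend on its actual KGF and kernel budget.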
Pages: 13062-13075
Page count: 14