Model Sparsification for Communication-Efficient Multi-Party Learning via Contrastive Distillation in Image Classification

Cited by: 3
Authors
Feng, Kai-Yuan [1 ,2 ]
Gong, Maoguo [1 ,3 ]
Pan, Ke [1 ,5 ]
Zhao, Hongyu [1 ,3 ]
Wu, Yue [1 ,4 ]
Sheng, Kai [1 ,2 ]
Affiliations
[1] Xidian Univ, Key Lab Collaborat Intelligence Syst, Minist Educ, Xian 710071, Peoples R China
[2] Xidian Univ, Acad Adv Interdisciplinary Res, Xian 710071, Peoples R China
[3] Xidian Univ, Sch Elect Engn, Xian 710071, Peoples R China
[4] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Peoples R China
[5] Xidian Univ, Sch Cyber Engn, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Data models; Computational modeling; Servers; Adaptation models; Training; Feature extraction; Performance evaluation; Multi-party learning; model sparsification; contrastive distillation; efficient communication; NEURAL-NETWORKS;
DOI
10.1109/TETCI.2023.3268713
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Multi-party learning allows all parties to train a joint model under legal and practical constraints without transmitting private data. Existing research can perform multi-party learning tasks on homogeneous data through deep networks. However, due to the heterogeneity of data across parties and the limits on computational resources and costs, traditional approaches can degrade the effectiveness of multi-party learning and cannot provide a personalized network for each party. In addition, building an adaptive model from the private data of different parties while reducing the computational cost and communication bandwidth of local models remains challenging. To address these challenges, we apply a model sparsification strategy to multi-party learning. Model sparsification not only reduces the computational overhead on local edge devices and the cost of communication and interaction between multi-party models, but also enables privatized, personalized networks tailored to the heterogeneity of local data. We use contrastive distillation during training to reduce the distance between the local and global models and to maintain the performance of the aggregated model on heterogeneous data. In brief, we develop an adaptive multi-party learning framework based on contrastive distillation that significantly reduces the communication cost of the learning process, improves the effectiveness of the aggregated model on locally heterogeneous and unbalanced data, and is easy to deploy on resource-limited edge devices. Finally, we verify the effectiveness of the framework through experiments on the Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets under different scenarios.
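The two ingredients highlighted in the abstract, model sparsification on each party's device and contrastive distillation between the local and global models, can be illustrated with a minimal sketch. The snippet below assumes PyTorch; the names `magnitude_sparsify`, `contrastive_distillation_loss`, and `local_step`, the assumption that the model returns (logits, features), and the hyperparameters (sparsity, temperature, lam) are illustrative placeholders, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def magnitude_sparsify(model, sparsity=0.9):
    # Zero out the smallest-magnitude entries of each weight matrix
    # (simple unstructured pruning; the paper's sparsification strategy may differ).
    with torch.no_grad():
        for param in model.parameters():
            if param.dim() < 2:          # skip biases / normalization parameters
                continue
            k = int(param.numel() * sparsity)
            if k == 0:
                continue
            threshold = param.abs().flatten().kthvalue(k).values
            param.mul_((param.abs() > threshold).float())

def contrastive_distillation_loss(local_feat, global_feat, temperature=0.5):
    # InfoNCE-style loss: each local representation is pulled toward the global
    # model's representation of the same sample and pushed away from the other
    # samples in the batch.
    local_feat = F.normalize(local_feat, dim=1)
    global_feat = F.normalize(global_feat, dim=1)
    logits = local_feat @ global_feat.t() / temperature   # (B, B) cosine similarities
    targets = torch.arange(local_feat.size(0), device=local_feat.device)
    return F.cross_entropy(logits, targets)

def local_step(local_model, global_model, x, y, optimizer, lam=1.0):
    # One hypothetical local training step on a single party: supervised loss
    # plus the contrastive distillation term against a frozen global model.
    local_logits, local_feat = local_model(x)      # assumed to return (logits, features)
    with torch.no_grad():
        _, global_feat = global_model(x)
    loss = F.cross_entropy(local_logits, y) \
        + lam * contrastive_distillation_loss(local_feat, global_feat)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, each party would sparsify its local model before uploading it (cutting communication cost) and use the contrastive term during local training to keep the personalized local model close to the aggregated global model despite heterogeneous data; the exact losses and pruning schedule used in the paper may differ.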
Pages: 150-163 (14 pages)