Model Sparsification for Communication-Efficient Multi-Party Learning via Contrastive Distillation in Image Classification

Cited by: 3
Authors
Feng, Kai-Yuan [1 ,2 ]
Gong, Maoguo [1 ,3 ]
Pan, Ke [1 ,5 ]
Zhao, Hongyu [1 ,3 ]
Wu, Yue [1 ,4 ]
Sheng, Kai [1 ,2 ]
Affiliations
[1] Xidian Univ, Key Lab Collaborat Intelligence Syst, Minist Educ, Xian 710071, Peoples R China
[2] Xidian Univ, Acad Adv Interdisciplinary Res, Xian 710071, Peoples R China
[3] Xidian Univ, Sch Elect Engn, Xian 710071, Peoples R China
[4] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Peoples R China
[5] Xidian Univ, Sch Cyber Engn, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Data models; Computational modeling; Servers; Adaptation models; Training; Feature extraction; Performance evaluation; Multi-party learning; model sparsification; contrastive distillation; efficient communication; NEURAL-NETWORKS;
DOI
10.1109/TETCI.2023.3268713
CLC (Chinese Library Classification)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-party learning allows all parties to train a joint model under legal and practical constraints without transmitting private data. Related research can perform multi-party learning tasks on homogeneous data through deep networks. However, due to the heterogeneity of data from different parties and the limitations of computational resources and costs, traditional approaches may degrade the effectiveness of multi-party learning and cannot provide a personalized network for each party. In addition, building an adaptive model from the private data of different parties while reducing the computational cost and communication bandwidth of local models remains challenging. To address these challenges, we apply a model sparsification strategy to multi-party learning. Model sparsification not only reduces the computational overhead on local edge devices and the cost of communication and interaction between multi-party models, but also enables privatized and personalized networks based on the heterogeneity of local data. We use contrastive distillation during training to reduce the distance between the local and global models and to maintain the performance of the aggregation model on heterogeneous data. In brief, we develop an adaptive multi-party learning framework based on contrastive distillation, which significantly reduces the communication cost in the learning process, improves the effectiveness of the aggregation model on local heterogeneous and unbalanced data, and is easy to deploy on resource-limited edge devices. Finally, we verify the effectiveness of this framework through experiments on the Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets under different scenarios.
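The abstract names two mechanisms: sparsifying each party's local model to cut communication and on-device computation, and a contrastive distillation loss that reduces the distance between the local and global models during training. As a rough illustration only, the sketch below shows one common way these two ideas are realized in PyTorch; the function names (sparsify_topk, contrastive_distill_loss), the keep ratio, and the temperature tau are illustrative assumptions and are not taken from the paper.

```python
# Hedged sketch (not the authors' released code): standard PyTorch operations
# illustrating (1) magnitude-based sparsification of a local model before
# upload and (2) a contrastive-distillation-style loss. All names and
# hyperparameters here are assumptions for illustration.

import torch
import torch.nn.functional as F

def sparsify_topk(state_dict, keep_ratio=0.1):
    """Magnitude-based sparsification: keep only the largest-magnitude
    fraction of each weight matrix before uploading it to the server."""
    sparse = {}
    for name, w in state_dict.items():
        if w.ndim < 2:                      # keep biases / norm params dense
            sparse[name] = w.clone()
            continue
        k = max(1, int(keep_ratio * w.numel()))
        # k-th largest magnitude = (numel - k + 1)-th smallest magnitude
        threshold = w.abs().flatten().kthvalue(w.numel() - k + 1).values
        sparse[name] = torch.where(w.abs() >= threshold, w, torch.zeros_like(w))
    return sparse

def contrastive_distill_loss(z_local, z_global, z_prev, tau=0.5):
    """Contrastive-distillation-style term (in the spirit of model-contrastive
    learning): pull the local model's features toward the global model's
    features and push them away from the previous local model's features."""
    pos = torch.exp(F.cosine_similarity(z_local, z_global, dim=1) / tau)
    neg = torch.exp(F.cosine_similarity(z_local, z_prev, dim=1) / tau)
    return -torch.log(pos / (pos + neg)).mean()

# Illustrative local update on one party (mu is a hypothetical weighting):
#   loss = F.cross_entropy(logits, labels) \
#        + mu * contrastive_distill_loss(z_local, z_global, z_prev)
#   ... optimize locally, then upload sparsify_topk(model.state_dict()).
```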
Pages: 150 - 163
Number of pages: 14
Related Papers
50 items in total
  • [41] EFMVFL: An Efficient and Flexible Multi-party Vertical Federated Learning without a Third Party
    Huang, Yimin
    Wang, Wanwan
    Zhao, Xingying
    Wang, Yukun
    Feng, Xinyu
    He, Hao
    Yao, Ming
    ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2024, 18 (03)
  • [42] MMPL: Multi-Objective Multi-Party Learning via Diverse Steps
    Zhang, Yuanqiao
    Gong, Maoguo
    Gao, Yuan
    Li, Hao
    Wang, Lei
    Wang, Yixin
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2024, 8 (01): : 684 - 696
  • [43] FedGK: Communication-Efficient Federated Learning through Group-Guided Knowledge Distillation
    Zhang, Wenjun
    Liu, Xiaoli
    Tarkoma, Sasu
    ACM TRANSACTIONS ON INTERNET TECHNOLOGY, 2024, 24 (04)
  • [44] IoT Device Friendly and Communication-Efficient Federated Learning via Joint Model Pruning and Quantization
    Prakash, Pavana
    Ding, Jiahao
    Chen, Rui
    Qin, Xiaoqi
    Shu, Minglei
    Cui, Qimei
    Guo, Yuanxiong
    Pan, Miao
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (15): : 13638 - 13650
  • [45] Communication-Efficient and Model-Heterogeneous Personalized Federated Learning via Clustered Knowledge Transfer
    Cho, Yae Jee
    Wang, Jianyu
    Chirvolu, Tarun
    Joshi, Gauri
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2023, 17 (01) : 234 - 247
  • [46] Communication-Efficient Federated Learning via Regularized Sparse Random Networks
    Mestoukirdi, Mohamad
    Esrafilian, Omid
    Gesbert, David
    Li, Qianrui
    Gresset, Nicolas
    IEEE COMMUNICATIONS LETTERS, 2024, 28 (07) : 1574 - 1578
  • [47] ADAPTIVE QUANTIZATION OF MODEL UPDATES FOR COMMUNICATION-EFFICIENT FEDERATED LEARNING
    Jhunjhunwala, Divyansh
    Gadhikar, Advait
    Joshi, Gauri
    Eldar, Yonina C.
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 3110 - 3114
  • [48] FedTCR: communication-efficient federated learning via taming computing resources
    Li, Kaiju
    Wang, Hao
    Zhang, Qinghua
    Complex & Intelligent Systems, 2023, 9 : 5199 - 5219
  • [49] Communication-Efficient Distributed Learning via Lazily Aggregated Quantized Gradients
    Sun, Jun
    Chen, Tianyi
    Giannakis, Georgios B.
    Yang, Zaiyue
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [50] Communication-efficient federated learning method via redundant data elimination
    Li K.
    Xu Q.
    Wang H.
    Tongxin Xuebao/Journal on Communications, 2023, 44 (05): : 79 - 93