Model Sparsification for Communication-Efficient Multi-Party Learning via Contrastive Distillation in Image Classification

Cited by: 3
|
Authors
Feng, Kai-Yuan [1 ,2 ]
Gong, Maoguo [1 ,3 ]
Pan, Ke [1 ,5 ]
Zhao, Hongyu [1 ,3 ]
Wu, Yue [1 ,4 ]
Sheng, Kai [1 ,2 ]
Affiliations
[1] Xidian Univ, Key Lab Collaborat Intelligence Syst, Minist Educ, Xian 710071, Peoples R China
[2] Xidian Univ, Acad Adv Interdisciplinary Res, Xian 710071, Peoples R China
[3] Xidian Univ, Sch Elect Engn, Xian 710071, Peoples R China
[4] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Peoples R China
[5] Xidian Univ, Sch Cyber Engn, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Data models; Computational modeling; Servers; Adaptation models; Training; Feature extraction; Performance evaluation; Multi-party learning; model sparsification; contrastive distillation; efficient communication; NEURAL-NETWORKS;
DOI
10.1109/TETCI.2023.3268713
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Multi-party learning allows all parties to train a joint model under legal and practical constraints without transmitting private data. Existing research can perform multi-party learning tasks on homogeneous data through deep networks. However, because data from different parties are heterogeneous and computational resources and costs are limited, traditional approaches may degrade the effectiveness of multi-party learning and cannot provide a personalized network for each party. Moreover, building an adaptive model from the private data of different parties while reducing the computational cost and communication bandwidth of local models remains challenging. To address these challenges, we apply a model sparsification strategy to multi-party learning. Model sparsification not only reduces the computational overhead on local edge devices and the cost of communication and interaction between multi-party models, but also enables privatized and personalized networks tailored to the heterogeneity of local data. We use a contrastive distillation method during training to reduce the distance between the local and global models and to maintain the performance of the aggregated model on heterogeneous data. In brief, we developed an adaptive multi-party learning framework based on contrastive distillation that significantly reduces the communication cost of the learning process, improves the effectiveness of the aggregated model on locally heterogeneous and unbalanced data, and is easy to deploy on resource-limited edge devices. Finally, we conducted experiments on the Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets in different scenarios to verify the effectiveness of the framework.
Pages: 150-163
Page count: 14
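
The abstract describes two mechanisms: sparsifying each party's model update before it is exchanged, and a contrastive distillation term that pulls the local model's representation toward the global model's. The paper's own formulation is not reproduced in this record, so the following is only a minimal PyTorch sketch of those two ideas under assumed choices: a top-k magnitude mask for sparsification and an InfoNCE-style loss with the global feature as the positive and the previous-round local feature as the negative. The function names, the temperature `tau`, and the keep ratio are illustrative assumptions, not the authors' API.

```python
# Minimal sketch (not the authors' implementation): top-k magnitude sparsification
# of a local model update, plus a contrastive distillation loss that pulls the
# local representation toward the global model's representation and away from
# the previous local model's representation. All names and defaults are assumed.
import torch
import torch.nn.functional as F


def topk_sparsify(update: torch.Tensor, keep_ratio: float = 0.1) -> torch.Tensor:
    """Keep only the largest-magnitude entries of a model update; zero the rest."""
    flat = update.flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    idx = flat.abs().topk(k).indices
    mask = torch.zeros_like(flat)
    mask[idx] = 1.0
    return (flat * mask).view_as(update)  # only k entries remain non-zero


def contrastive_distill_loss(z_local, z_global, z_prev, tau: float = 0.5):
    """InfoNCE-style loss: global feature is the positive pair,
    the previous-round local feature is the negative pair."""
    z_local = F.normalize(z_local, dim=1)
    z_global = F.normalize(z_global, dim=1)
    z_prev = F.normalize(z_prev, dim=1)
    pos = torch.sum(z_local * z_global, dim=1) / tau
    neg = torch.sum(z_local * z_prev, dim=1) / tau
    logits = torch.stack([pos, neg], dim=1)  # class 0 (positive) should win
    labels = torch.zeros(z_local.size(0), dtype=torch.long, device=z_local.device)
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    # A party sparsifies the difference between its trained local weights and
    # the received global weights before uploading it to the server.
    w_global = torch.randn(1000)
    w_local = w_global + 0.01 * torch.randn(1000)
    sparse_update = topk_sparsify(w_local - w_global, keep_ratio=0.05)
    print("non-zero entries:", int((sparse_update != 0).sum()))

    # Toy feature batches standing in for local, global, and previous-round features.
    z_l, z_g, z_p = (torch.randn(8, 128) for _ in range(3))
    print("distillation loss:", contrastive_distill_loss(z_l, z_g, z_p).item())
```

In this sketch the keep ratio directly controls the upload size per round, while the distillation term regularizes local training so that sparsified, personalized local models do not drift too far from the aggregated global model.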