Federated Self-Supervised Learning Based on Prototypes Clustering Contrastive Learning for Internet of Vehicles Applications

Cited by: 0
Authors
Dai, Cheng [1 ]
Wei, Shuai [1 ]
Dai, Shengxin [1 ]
Garg, Sahil [2 ,3 ]
Kaddoum, Georges [2 ,4 ]
Shamim Hossain, M. [5 ]
Affiliations
[1] Sichuan University, School of Computer Science, Chengdu 610042, China
[2] École de Technologie Supérieure, Electrical Engineering Department, Montreal, QC H3C 1K3, Canada
[3] Chitkara University Institute of Engineering and Technology, Chitkara University, Centre for Research Impact and Outcome, Rajpura 140401, India
[4] Lebanese American University, Artificial Intelligence and Cyber Systems Research Center, Beirut 03797751, Lebanon
[5] King Saud University, College of Computer and Information Sciences, Department of Software Engineering, Riyadh 12372, Saudi Arabia
Funding
National Natural Science Foundation of China
Keywords
Adversarial machine learning; Federated learning; Self-supervised learning; Supervised learning
DOI
10.1109/JIOT.2024.3453336
Abstract
Federated learning (FL) is a novel paradigm for distributed edge intelligence in Internet-of-Vehicles (IoV) applications, enabling strong model training performance without the need to share local data. However, in practical FL architectures, non-independent and identically distributed (non-IID) data at edge devices, together with randomly participating distributed nodes, can cause model bias and a subsequent decrease in overall performance. To solve this problem, a new federated self-supervised learning method based on prototypes clustering contrastive learning (FedPCC) is proposed, which effectively addresses asynchronous edge training and global model bias by introducing an unsupervised prototype layer. The prototype layer maps edge features into a global space and performs clustering, enabling a new server-side aggregation of global prototypes. The remaining model components are then aggregated with weights based on each client's data size. In addition, during the parameter deployment phase, we replace the local prototype layer to acquire global knowledge, while employing momentum updates to preserve the local knowledge held in the other components. Finally, to assess the efficacy of our proposed approach, we carried out comprehensive experiments on various datasets. The findings show that our method achieves state-of-the-art performance, which further validates its effectiveness. © 2024 IEEE.
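The abstract outlines a workflow of server-side prototype clustering, data-size-weighted aggregation of the remaining components, and client-side momentum updates. The following is a minimal Python sketch of that workflow only; all names (aggregate_prototypes, aggregate_backbone, client_update), the use of k-means for the clustering step, and the momentum coefficient are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the aggregation/deployment loop described in the abstract.
# Model components are represented as dicts of NumPy arrays for simplicity.
import numpy as np
from sklearn.cluster import KMeans

def aggregate_prototypes(client_prototypes, num_global_prototypes=10):
    """Cluster prototypes uploaded by all clients into global prototypes (assumed k-means)."""
    stacked = np.vstack(client_prototypes)                     # (sum_i P_i, d)
    kmeans = KMeans(n_clusters=num_global_prototypes, n_init=10).fit(stacked)
    return kmeans.cluster_centers_                             # (num_global_prototypes, d)

def aggregate_backbone(client_weights, client_sizes):
    """Aggregate the non-prototype components with weights based on each client's data size."""
    total = float(sum(client_sizes))
    coeffs = [n / total for n in client_sizes]
    return {
        name: sum(c * w[name] for c, w in zip(coeffs, client_weights))
        for name in client_weights[0]
    }

def client_update(local_backbone, global_backbone, global_prototypes, beta=0.9):
    """Deployment phase: replace the prototype layer with the global one and
    momentum-update the other components to retain local knowledge."""
    new_backbone = {
        name: beta * local_backbone[name] + (1.0 - beta) * global_backbone[name]
        for name in local_backbone
    }
    return new_backbone, global_prototypes.copy()

For example, with two clients holding 1,000 and 3,000 samples, aggregate_backbone weights their parameters 0.25 and 0.75, while aggregate_prototypes ignores data size and simply clusters the pooled prototype vectors; the larger beta is, the more local knowledge each client retains after deployment.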
Pages: 4692 - 4700
Related Papers
50 items in total
  • [31] JGCL: Joint Self-Supervised and Supervised Graph Contrastive Learning
    Akkas, Selahattin
    Azad, Ariful
    COMPANION PROCEEDINGS OF THE WEB CONFERENCE 2022, WWW 2022 COMPANION, 2022, : 1099 - 1105
  • [32] Contrastive UCB: Provably Efficient Contrastive Self-Supervised Learning in Online Reinforcement Learning
    Qiu, Shuang
    Wang, Lingxiao
    Bai, Chenjia
    Yang, Zhuoran
    Wang, Zhaoran
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [33] FundusNet, A self-supervised contrastive learning framework for Fundus Feature Learning
    Mojab, Nooshin
    Alam, Minhaj
    Hallak, Joelle
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2022, 63 (07)
  • [34] Self-Supervised Contrastive Learning for Volcanic Unrest Detection
    Bountos, Nikolaos Ioannis
    Papoutsis, Ioannis
    Michail, Dimitrios
    Anantrasirichai, Nantheera
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2022, 19
  • [35] Self-Supervised Contrastive Learning In Spiking Neural Networks
    Bahariasl, Yeganeh
    Kheradpisheh, Saeed Reza
    PROCEEDINGS OF THE 13TH IRANIAN/3RD INTERNATIONAL MACHINE VISION AND IMAGE PROCESSING CONFERENCE, MVIP, 2024, : 181 - 185
  • [36] Self-supervised Contrastive Learning for Predicting Game Strategies
    Lee, Young Jae
    Baek, Insung
    Jo, Uk
    Kim, Jaehoon
    Bae, Jinsoo
    Jeong, Keewon
    Kim, Seoung Bum
    INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 1, 2023, 542 : 136 - 147
  • [37] Contrasting Contrastive Self-Supervised Representation Learning Pipelines
    Kotar, Klemen
    Ilharco, Gabriel
    Schmidt, Ludwig
    Ehsani, Kiana
    Mottaghi, Roozbeh
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 9929 - 9939
  • [38] Contrastive Self-Supervised Learning for Skeleton Action Recognition
    Gao, Xuehao
    Yang, Yang
    Du, Shaoyi
    NEURIPS 2020 WORKSHOP ON PRE-REGISTRATION IN MACHINE LEARNING, VOL 148, 2020, 148 : 51 - 61
  • [39] CONTRASTIVE SELF-SUPERVISED LEARNING FOR WIRELESS POWER CONTROL
    Naderializadeh, Navid
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 4965 - 4969
  • [40] Toward Understanding the Feature Learning Process of Self-supervised Contrastive Learning
    Wen, Zixin
    Li, Yuanzhi
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139