What makes for uniformity for non-contrastive self-supervised learning?

Cited by: 1
Authors
Wang YinQuan [1 ,2 ]
Zhang XiaoPeng [3 ]
Tian Qi [3 ]
Lü JinHu [4]
Affiliations
[1] Acad Math & Syst Sci, Chinese Acad Sci, Key Lab Syst & Control, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Math Sci, Beijing 100049, Peoples R China
[3] Huawei Inc, Shenzhen 518128, Peoples R China
[4] Beihang Univ, Sch Automat Sci & Elect Engn, State Key Lab Software Dev Environm, Beijing 100191, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
contrastive learning; self-supervised learning; representation; uniformity; dynamics;
DOI
10.1007/s11431-021-2041-7
CLC number
T [Industrial Technology];
Subject classification code
08;
Abstract
Self-supervised learning (SSL) has recently made remarkable progress, especially through contrastive methods that pull two augmented views of one image together while pushing the views of all other images apart. In this setting, negative pairs play a key role in avoiding representation collapse. Recent methods such as bootstrap your own latent (BYOL) and SimSiam have, surprisingly, achieved comparable performance even without contrasting negative samples. This raises a basic theoretical question for SSL: how do different SSL methods avoid representation collapse, and is there a common design principle? In this study, we look closely at current non-contrastive SSL methods and analyze the key factors that prevent collapse. To this end, we present a new uniformity metric as an indicator and study its local dynamics to diagnose collapse in different scenarios. Moreover, we present principles for choosing a good predictor, so that the optimization process can be explicitly controlled. Our theoretical analysis is validated on widely used benchmarks spanning datasets of different scales. We also compare recent SSL methods, analyze what they have in common in avoiding collapse, and discuss ideas for future algorithm design.
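The abstract does not spell out the paper's new uniformity indicator. As a reference point only, here is a minimal PyTorch sketch of the standard uniformity measure of Wang and Isola (2020), the log of the mean Gaussian potential over all pairs of L2-normalized embeddings; the function name `uniformity`, the temperature default, and the toy tensors are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def uniformity(z: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Uniformity measure of Wang & Isola (2020): the log of the mean
    Gaussian potential over all pairs of L2-normalized embeddings.
    Well-spread embeddings give a strongly negative value; a fully
    collapsed representation drives the value toward 0, which is what
    makes a quantity like this usable as a collapse diagnostic."""
    z = F.normalize(z, dim=1)              # project embeddings onto the unit hypersphere
    sq_dists = torch.pdist(z, p=2).pow(2)  # squared Euclidean distance for every pair of rows
    return sq_dists.mul(-t).exp().mean().log()

# Toy check: random embeddings are well spread; identical rows are collapsed.
spread = torch.randn(512, 128)
collapsed = torch.ones(512, 128)
print(uniformity(spread))     # strongly negative
print(uniformity(collapsed))  # exactly 0, since all pairwise distances are 0
```

Tracking a quantity like this over training, in the spirit of the "local dynamics" analysis the abstract describes, would flag collapse whenever the value drifts toward 0.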
Pages: 2399-2408
Number of pages: 10
Related papers
50 items in total
  • [1] What makes for uniformity for non-contrastive self-supervised learning?
    Wang, YinQuan
    Zhang, XiaoPeng
    Tian, Qi
    Lü, JinHu
    SCIENCE CHINA TECHNOLOGICAL SCIENCES, 2022, 65 (10) : 2399 - 2408
  • [2] The Mechanism of Prediction Head in Non-contrastive Self-supervised Learning
    Wen, Zixin
    Li, Yuanzhi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [3] Contrastive and Non-Contrastive Strategies for Federated Self-Supervised Representation Learning and Deep Clustering
    Miao, Runxuan
    Koyuncu, Erdem
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2024, 18 (06) : 1070 - 1084
  • [4] Non-Contrastive Self-Supervised Learning of Utterance-Level Speech Representations
    Cho, Jaejin
    Pappagari, Raghavendra
    Zelasko, Piotr
    Velazquez, Laureano Moro
    Villalba, Jesus
    Dehak, Najim
    INTERSPEECH 2022, 2022, : 4028 - 4032
  • [5] Transferability of Non-contrastive Self-supervised Learning to Chronic Wound Image Recognition
    Akay, Julien Marteen
    Schenck, Wolfram
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT VIII, 2024, 15023 : 427 - 444
  • [6] Contrastive and Non-Contrastive Self-Supervised Learning Recover Global and Local Spectral Embedding Methods
    Balestriero, Randall
    LeCun, Yann
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [7] C3-DINO: Joint Contrastive and Non-Contrastive Self-Supervised Learning for Speaker Verification
    Zhang, Chunlei
    Yu, Dong
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2022, 16 (06) : 1273 - 1283
  • [8] Bridging the Gap from Asymmetry Tricks to Decorrelation Principles in Non-contrastive Self-supervised Learning
    Liu, Kang-Jun
    Suganuma, Masanori
    Okatani, Takayuki
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,