What makes for uniformity for non-contrastive self-supervised learning?

Cited by: 1
Authors
Wang YinQuan [1 ,2 ]
Zhang XiaoPeng [3 ]
Tian Qi [3 ]
Lu JinHu [4 ]
Affiliations
[1] Acad Math & Syst Sci, Chinese Acad Sci, Key Lab Syst & Control, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Math Sci, Beijing 100049, Peoples R China
[3] Huawei Inc, Shenzhen 518128, Peoples R China
[4] Beihang Univ, Sch Automat Sci & Elect Engn, State Key Lab Software Dev Environm, Beijing 100191, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
contrastive learning; self-supervised learning; representation; uniformity; dynamics;
DOI
10.1007/s11431-021-2041-7
Chinese Library Classification (CLC)
T [Industrial Technology];
Subject Classification Code
08;
Abstract
Self-supervised learning (SSL) has recently made remarkable progress, especially with contrastive methods that pull two augmented views of one image together and push the views of all other images away. In this setting, negative pairs play a key role in avoiding collapsed representations. Recent methods such as bootstrap your own latent (BYOL) and SimSiam surprisingly achieve comparable performance even without contrasting negative samples. This raises a basic theoretical question for SSL: how do different SSL methods avoid collapsed representations, and is there a common design principle? In this study, we look closely at current non-contrastive SSL methods and analyze the key factors that prevent collapse. To this end, we introduce a new uniformity metric as an indicator and study its local dynamics to diagnose collapse in different scenarios. Moreover, we present principles for choosing a good predictor, so that the optimization process can be explicitly controlled. Our theoretical analysis is validated on widely used benchmarks spanning datasets of different scales. We also compare recent SSL methods, analyze their commonalities in avoiding collapse, and discuss ideas for future algorithm designs.
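As a point of reference for the uniformity indicator mentioned in the abstract, the minimal sketch below computes a standard uniformity measure on a batch of embeddings. The paper's own metric and dynamics analysis are not reproduced in this record, so this is only an illustrative stand-in using the Gaussian-potential uniformity of Wang and Isola (2020) on L2-normalized features; the function name uniformity and the temperature t are assumptions made for the example, not the authors' definitions.

# Minimal sketch of a uniformity indicator for learned representations (assumed
# example; not the metric defined in the paper). Uses the Gaussian-potential
# uniformity measure of Wang & Isola (2020): embeddings that spread uniformly
# over the unit hypersphere score lower, collapsed embeddings score near zero.
import torch
import torch.nn.functional as F

def uniformity(embeddings: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Log of the mean Gaussian potential over all pairs; lower = more uniform."""
    z = F.normalize(embeddings, dim=1)       # project features onto the unit sphere
    sq_dists = torch.pdist(z, p=2).pow(2)    # pairwise squared Euclidean distances
    return sq_dists.mul(-t).exp().mean().log()

if __name__ == "__main__":
    # A collapsed representation (all samples map to nearly one point) drives the
    # value toward 0, so tracking it during training can flag representation collapse.
    collapsed = torch.ones(128, 64) + 1e-3 * torch.randn(128, 64)
    spread = torch.randn(128, 64)
    print(uniformity(collapsed).item(), uniformity(spread).item())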
Pages: 2399-2408
Number of pages: 10
Related Papers
50 items in total
  • [21] A comprehensive perspective of contrastive self-supervised learning
    Chen, Songcan
    Geng, Chuanxing
    FRONTIERS OF COMPUTER SCIENCE, 2021, 15 (04)
  • [23] Slimmable Networks for Contrastive Self-supervised Learning
    Zhao, Shuai
    Zhu, Linchao
    Wang, Xiaohan
    Yang, Yi
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2025, 133 (03) : 1222 - 1237
  • [24] Self-supervised contrastive learning for itinerary recommendation
    Chen, Lei
    Zhu, Guixiang
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 268
  • [25] Similarity Contrastive Estimation for Self-Supervised Soft Contrastive Learning
    Denize, Julien
    Rabarisoa, Jaonary
    Orcesi, Astrid
    Herault, Romain
    Canu, Stephane
    2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 2705 - 2715
  • [26] Pathological Image Contrastive Self-supervised Learning
    Qin, Wenkang
    Jiang, Shan
    Luo, Lin
    RESOURCE-EFFICIENT MEDICAL IMAGE ANALYSIS, REMIA 2022, 2022, 13543 : 85 - 94
  • [27] Contrastive Transformation for Self-supervised Correspondence Learning
    Wang, Ning
    Zhou, Wengang
    Li, Houqiang
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 10174 - 10182
  • [28] Self-Supervised Contrastive Learning for Singing Voices
    Yakura, Hiromu
    Watanabe, Kento
    Goto, Masataka
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2022, 30 : 1614 - 1623
  • [29] JGCL: Joint Self-Supervised and Supervised Graph Contrastive Learning
    Akkas, Selahattin
    Azad, Ariful
    COMPANION PROCEEDINGS OF THE WEB CONFERENCE 2022, WWW 2022 COMPANION, 2022, : 1099 - 1105
  • [30] Contrasting the landscape of contrastive and non-contrastive learning
    Pokle, Ashwini
    Tian, Jinjin
    Li, Yuchen
    Risteski, Andrej
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151