Contrasting Contrastive Self-Supervised Representation Learning Pipelines

Cited by: 10
Authors:
Kotar, Klemen [1 ]
Ilharco, Gabriel [2 ]
Schmidt, Ludwig [2 ]
Ehsani, Kiana [1 ]
Mottaghi, Roozbeh [1 ,2 ]
Affiliations:
[1] PRIOR, Allen Institute for AI, Seattle, WA 98103, USA
[2] University of Washington, Seattle, WA 98195, USA
DOI: 10.1109/ICCV48922.2021.00980
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
In the past few years, we have witnessed remarkable breakthroughs in self-supervised representation learning. Despite the success and adoption of representations learned through this paradigm, much is yet to be understood about how different training methods and datasets influence performance on downstream tasks. In this paper, we analyze contrastive approaches as one of the most successful and popular variants of self-supervised representation learning. We perform this analysis from the perspective of the training algorithms, pre-training datasets and end tasks. We examine over 700 training experiments including 30 encoders, 4 pre-training datasets and 20 diverse downstream tasks. Our experiments address various questions regarding the performance of self-supervised models compared to their supervised counterparts, current benchmarks used for evaluation, and the effect of the pre-training data on end task performance. Our Visual Representation Benchmark (ViRB) is available at: https://github.com/allenai/virb.
Pages: 9929-9939
Page count: 11
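
Note: the abstract above analyzes contrastive pre-training objectives. For orientation only, the sketch below shows a generic NT-Xent (InfoNCE-style) contrastive loss of the kind optimized by SimCLR-family encoders; the function name nt_xent_loss, the tensor shapes, and the temperature default are illustrative assumptions and are not drawn from this paper or from the ViRB codebase.

# Illustrative NT-Xent (normalized temperature-scaled cross entropy) loss,
# the contrastive objective popularized by SimCLR. Names and defaults here
# are assumptions for illustration, not this paper's exact configuration.
import torch
import torch.nn.functional as F

def nt_xent_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z_a, z_b: (N, D) embeddings of two augmented views of the same N images."""
    n = z_a.size(0)
    z = F.normalize(torch.cat([z_a, z_b], dim=0), dim=1)   # (2N, D), unit-norm rows
    sim = z @ z.t() / temperature                           # (2N, 2N) scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                       # exclude each sample's self-similarity
    # For row i, the positive is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    # Toy usage: random embeddings standing in for encoder outputs.
    za, zb = torch.randn(8, 128), torch.randn(8, 128)
    print(nt_xent_loss(za, zb).item())
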
Related papers (50 in total):
  • [41] Self-Supervised Contrastive Representation Learning for Semi-Supervised Time-Series Classification
    Eldele, Emadeldeen
    Ragab, Mohamed
    Chen, Zhenghua
    Wu, Min
    Kwoh, Chee-Keong
    Li, Xiaoli
    Guan, Cuntai
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (12) : 15604 - 15618
  • [42] A comprehensive perspective of contrastive self-supervised learning
    Chen, Songcan
    Geng, Chuanxing
    FRONTIERS OF COMPUTER SCIENCE, 2021, 15 (04) : 102 - 104
  • [43] On Compositions of Transformations in Contrastive Self-Supervised Learning
    Patrick, Mandela
    Asano, Yuki M.
    Kuznetsova, Polina
    Fong, Ruth
    Henriques, Joao F.
    Zweig, Geoffrey
    Vedaldi, Andrea
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 9557 - 9567
  • [44] Contrastive Self-supervised Learning for Graph Classification
    Zeng, Jiaqi
    Xie, Pengtao
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 10824 - 10832
  • [45] Group Contrastive Self-Supervised Learning on Graphs
    Xu, Xinyi
    Deng, Cheng
    Xie, Yaochen
    Ji, Shuiwang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (03) : 3169 - 3180
  • [46] Self-supervised contrastive learning on agricultural images
    Guldenring, Ronja
    Nalpantidis, Lazaros
    COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2021, 191
  • [47] A comprehensive perspective of contrastive self-supervised learning
    Chen, Songcan
    Geng, Chuanxing
    FRONTIERS OF COMPUTER SCIENCE, 2021, 15 (04)
  • [48] Self-supervised contrastive learning for itinerary recommendation
    Chen, Lei
    Zhu, Guixiang
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 268
  • [49] A comprehensive perspective of contrastive self-supervised learning
    Chen, Songcan
    Geng, Chuanxing
    FRONTIERS OF COMPUTER SCIENCE, 2021, 15 (04)
  • [50] Slimmable Networks for Contrastive Self-supervised Learning
    Zhao, Shuai
    Zhu, Linchao
    Wang, Xiaohan
    Yang, Yi
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2025, 133 (03) : 1222 - 1237