Contrasting Contrastive Self-Supervised Representation Learning Pipelines

Cited by: 10
Authors
Kotar, Klemen [1 ]
Ilharco, Gabriel [2 ]
Schmidt, Ludwig [2 ]
Ehsani, Kiana [1 ]
Mottaghi, Roozbeh [1 ,2 ]
Affiliations
[1] PRIOR Allen Inst AI, Seattle, WA 98103 USA
[2] Univ Washington, Seattle, WA 98195 USA
DOI
10.1109/ICCV48922.2021.00980
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the past few years, we have witnessed remarkable breakthroughs in self-supervised representation learning. Despite the success and adoption of representations learned through this paradigm, much is yet to be understood about how different training methods and datasets influence performance on downstream tasks. In this paper, we analyze contrastive approaches as one of the most successful and popular variants of self-supervised representation learning. We perform this analysis from the perspective of the training algorithms, pre-training datasets and end tasks. We examine over 700 training experiments including 30 encoders, 4 pre-training datasets and 20 diverse downstream tasks. Our experiments address various questions regarding the performance of self-supervised models compared to their supervised counterparts, current benchmarks used for evaluation, and the effect of the pre-training data on end task performance. Our Visual Representation Benchmark (ViRB) is available at: https://github.com/allenai/virb.
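For context on the contrastive approaches the abstract analyzes: such methods typically train an encoder by pulling embeddings of two views of the same input together while pushing apart embeddings of other inputs, commonly via an InfoNCE-style loss. A minimal NumPy sketch of that objective follows; the function name, batch layout, and temperature value are illustrative assumptions, not details taken from this paper:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss between two batches of embeddings.

    Row i of z1 and row i of z2 are a positive pair (two views of the
    same input); all other rows in z2 act as negatives for z1[i].
    """
    # Normalize embeddings to unit length so logits are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature            # pairwise similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs sit on the diagonal; minimize their negative log-likelihood.
    return -np.mean(np.diag(log_probs))
```

The temperature controls how sharply the softmax concentrates on the hardest negatives; small values (around 0.1) are common in practice, though the best setting varies across methods and datasets.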
Pages: 9929-9939
Page count: 11
Related papers
50 in total
  • [1] Grouped Contrastive Learning of Self-Supervised Sentence Representation
    Wang, Qian
    Zhang, Weiqi
    Lei, Tianyi
    Peng, Dezhong
    APPLIED SCIENCES-BASEL, 2023, 13 (17):
  • [2] Self-supervised contrastive representation learning for semantic segmentation
    Liu B.
    Cai H.
    Wang Y.
    Chen X.
    Xi'an Dianzi Keji Daxue Xuebao/Journal of Xidian University, 2024, 51 (01): : 125 - 134
  • [3] CONTRASTIVE SEPARATIVE CODING FOR SELF-SUPERVISED REPRESENTATION LEARNING
    Wang, Jun
    Lam, Max W. Y.
    Su, Dan
    Yu, Dong
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 3865 - 3869
  • [4] CONTRASTIVE HEARTBEATS: CONTRASTIVE LEARNING FOR SELF-SUPERVISED ECG REPRESENTATION AND PHENOTYPING
    Wei, Crystal T.
    Hsieh, Ming-En
    Liu, Chien-Liang
    Tseng, Vincent S.
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 1126 - 1130
  • [5] Motion Sensitive Contrastive Learning for Self-supervised Video Representation
    Ni, Jingcheng
    Zhou, Nan
    Qin, Jie
    Wu, Qian
    Liu, Junqi
    Li, Boxun
    Huang, Di
    COMPUTER VISION - ECCV 2022, PT XXXV, 2022, 13695 : 457 - 474
  • [6] Contrastive Self-Supervised Learning With Smoothed Representation for Remote Sensing
    Jung, Heechul
    Oh, Yoonju
    Jeong, Seongho
    Lee, Chaehyeon
    Jeon, Taegyun
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2022, 19
  • [7] Contrastive Self-supervised Representation Learning Using Synthetic Data
    She, Dong-Yu
    Xu, Kun
    INTERNATIONAL JOURNAL OF AUTOMATION AND COMPUTING, 2021, 18 (04) : 556 - 567
  • [8] Self-supervised Segment Contrastive Learning for Medical Document Representation
    Abro, Waheed Ahmed
    Kteich, Hanane
    Bouraoui, Zied
    ARTIFICIAL INTELLIGENCE IN MEDICINE, PT I, AIME 2024, 2024, 14844 : 312 - 321