A Broad Study on the Transferability of Visual Representations with Contrastive Learning

Cited by: 26
Authors:
Islam, Ashraful [1 ]
Chen, Chun-Fu [2 ,3 ]
Panda, Rameswar [2 ,3 ]
Karlinsky, Leonid [3 ]
Radke, Richard [1 ]
Feris, Rogerio [2 ,3 ]
Affiliations:
[1] Rensselaer Polytech Inst, Troy, NY 12181 USA
[2] MIT IBM Watson AI Lab, Cambridge, MA USA
[3] IBM Res, Armonk, NY USA
DOI: 10.1109/ICCV48922.2021.00872
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
Tremendous progress has been made in visual representation learning, notably with the recent success of self-supervised contrastive learning methods. Supervised contrastive learning has also been shown to outperform its cross-entropy counterparts by leveraging labels to choose where to contrast. However, there has been little work exploring the transfer capability of contrastive learning to a different domain. In this paper, we conduct a comprehensive study on the transferability of learned representations of different contrastive approaches for linear evaluation, full-network transfer, and few-shot recognition on 12 downstream datasets from different domains, and on object detection tasks on MSCOCO and VOC0712. The results show that the contrastive approaches learn representations that are easily transferable to a different downstream task. We further observe that a joint objective of self-supervised contrastive loss with cross-entropy/supervised-contrastive loss leads to better transferability of these models over their supervised counterparts. Our analysis reveals that the representations learned by the contrastive approaches contain more low/mid-level semantics than cross-entropy models, which enables them to quickly adapt to a new task. Our code and models will be publicly available to facilitate future research on the transferability of visual representations.
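The joint objective mentioned in the abstract can be sketched as a weighted sum of a self-supervised contrastive (NT-Xent-style) loss over two augmented views and a standard cross-entropy loss on class logits. The sketch below uses numpy for clarity; the function names (`joint_loss`, `nt_xent_loss`) and the weight `alpha` are illustrative assumptions, not the authors' implementation, which would normally be written against a deep-learning framework.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Self-supervised contrastive (NT-Xent) loss over two augmented views.
    z1, z2: (N, D) L2-normalized embeddings; row i of z1 and row i of z2
    form the positive pair, all other rows are negatives."""
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)          # (2N, D) joint batch
    sim = z @ z.T / temperature                   # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                # exclude self-comparisons
    # the positive of sample i is its other view at index i + n (mod 2n)
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos_idx].mean()

def cross_entropy_loss(logits, labels):
    """Standard softmax cross-entropy on (N, C) logits with integer labels."""
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), labels].mean()

def joint_loss(z1, z2, logits, labels, alpha=0.5):
    """Weighted combination of the two objectives; alpha is a hypothetical
    mixing weight, not a value taken from the paper."""
    return alpha * nt_xent_loss(z1, z2) + (1 - alpha) * cross_entropy_loss(logits, labels)
```

Because the self-supervised term only needs the two augmented views while the supervised term only needs the logits and labels, the same backbone can be trained against both at once, which is the combination the study finds transfers best.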
Pages: 8825-8835 (11 pages)