A Broad Study on the Transferability of Visual Representations with Contrastive Learning

Cited by: 26
Authors
Islam, Ashraful [1]
Chen, Chun-Fu [2,3]
Panda, Rameswar [2,3]
Karlinsky, Leonid [3]
Radke, Richard [1]
Feris, Rogerio [2,3]
Affiliations
[1] Rensselaer Polytech Inst, Troy, NY 12181 USA
[2] MIT-IBM Watson AI Lab, Cambridge, MA USA
[3] IBM Res, Armonk, NY USA
DOI
10.1109/ICCV48922.2021.00872
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Tremendous progress has been made in visual representation learning, notably with the recent success of self-supervised contrastive learning methods. Supervised contrastive learning has also been shown to outperform its cross-entropy counterpart by leveraging labels to choose where to contrast. However, there has been little work exploring how well contrastively learned representations transfer to different domains. In this paper, we conduct a comprehensive study on the transferability of the representations learned by different contrastive approaches, covering linear evaluation, full-network transfer, and few-shot recognition on 12 downstream datasets from different domains, as well as object detection on MSCOCO and VOC0712. The results show that the contrastive approaches learn representations that transfer easily to other downstream tasks. We further observe that jointly optimizing a self-supervised contrastive loss with a cross-entropy or supervised-contrastive loss yields better transferability than the purely supervised counterparts. Our analysis reveals that the representations learned with contrastive approaches contain more low/mid-level semantics than those of cross-entropy models, which enables them to quickly adapt to a new task. Our code and models will be publicly available to facilitate future research on the transferability of visual representations.
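To make the joint objective described in the abstract concrete, below is a minimal PyTorch sketch combining a self-supervised contrastive (InfoNCE-style) loss over two augmented views with a standard cross-entropy classification loss. The function names, the two-view batching, and the weighting factor alpha are illustrative assumptions for exposition, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def info_nce_loss(z1, z2, temperature=0.1):
        # Self-supervised contrastive (InfoNCE) loss over two augmented
        # views; z1, z2 are (N, D) embeddings of the same N images.
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        n = z1.size(0)
        z = torch.cat([z1, z2], dim=0)              # (2N, D)
        sim = z @ z.t() / temperature               # (2N, 2N) cosine logits
        mask = torch.eye(2 * n, dtype=torch.bool, device=sim.device)
        sim = sim.masked_fill(mask, float('-inf'))  # exclude self-pairs
        # The positive for row i is the other augmented view of image i.
        targets = torch.cat([torch.arange(n, 2 * n),
                             torch.arange(0, n)]).to(sim.device)
        return F.cross_entropy(sim, targets)

    def joint_loss(logits, labels, z1, z2, alpha=1.0):
        # Joint objective: supervised cross-entropy plus a self-supervised
        # contrastive term, weighted by alpha (a hypothetical weighting).
        return F.cross_entropy(logits, labels) + alpha * info_nce_loss(z1, z2)

In a transfer study such as this one, a backbone pretrained with a joint loss of this kind would then be assessed on each downstream dataset by linear evaluation (training only a linear classifier on frozen features), by full-network fine-tuning, or by few-shot recognition.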
Pages: 8825-8835
Number of pages: 11
Related Papers
50 items in total (items [21]-[30] shown)
  • [21] Looking Beyond Corners: Contrastive Learning of Visual Representations for Keypoint Detection and Description Extraction
    Siqueira, Henrique
    Ruhkamp, Patrick
    Halfaoui, Ibrahim
    Karmann, Markus
    Urfalioglu, Onay
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022
  • [22] Learning to Perturb for Contrastive Learning of Unsupervised Sentence Representations
    Zhou, Kun
    Zhou, Yuanhang
    Zhao, Wayne Xin
    Wen, Ji-Rong
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2023, 31: 3935-3944
  • [23] With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations
    Dwibedi, Debidatta
    Aytar, Yusuf
    Tompson, Jonathan
    Sermanet, Pierre
    Zisserman, Andrew
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021: 9568-9577
  • [24] Mutual Contrastive Learning for Visual Representation Learning
    Yang, Chuanguang
    An, Zhulin
    Cai, Linhang
    Xu, Yongjun
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022: 3045-3053
  • [25] A Simplified Framework for Contrastive Learning for Node Representations
    Hong, Ilgee
    Tran, Huy
    Donnat, Claire
    FIFTY-SEVENTH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS, 2023: 573-577
  • [26] Disentangled contrastive learning for fair graph representations
    Zhang, Guixian
    Yuan, Guan
    Cheng, Debo
    Liu, Lin
    Li, Jiuyong
    Zhang, Shichao
    NEURAL NETWORKS, 2025, 181
  • [27] Contrastive learning of T cell receptor representations
    Nagano, Yuta
    Pyo, Andrew G. T.
    Milighetti, Martina
    Henderson, James
    Shawe-Taylor, John
    Chain, Benny
    Tiffeau-Mayer, Andreas
    CELL SYSTEMS, 2025, 16 (01)
  • [28] Debiased Contrastive Learning of Unsupervised Sentence Representations
    Zhou, Kun
    Zhang, Beichen
    Zhao, Wayne Xin
    Wen, Ji-Rong
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1 (LONG PAPERS), 2022: 6120-6130
  • [29] CURL: Contrastive Unsupervised Representations for Reinforcement Learning
    Laskin, Michael
    Srinivas, Aravind
    Abbeel, Pieter
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020
  • [30] MixIR: Mixing Input and Representations for Contrastive Learning
    Zhao, Tianhao
    Guo, Xiaoyang
    Lin, Yutian
    Du, Bo
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024