On Compositions of Transformations in Contrastive Self-Supervised Learning

Cited by: 17
Authors
Patrick, Mandela [1 ,2 ]
Asano, Yuki M. [2 ]
Kuznetsova, Polina [1 ]
Fong, Ruth [2 ]
Henriques, Joao F. [2 ]
Zweig, Geoffrey [1 ]
Vedaldi, Andrea [1 ,2 ]
Affiliations
[1] Facebook AI Res, Menlo Pk, CA 94025 USA
[2] Univ Oxford, Visual Geometry Grp, Oxford, England
DOI: 10.1109/ICCV48922.2021.00944
Chinese Library Classification: TP18 [Artificial intelligence theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
In the image domain, excellent representations can be learned by inducing invariance to content-preserving transformations via noise contrastive learning. In this paper, we generalize contrastive learning to a wider set of transformations, and their compositions, for which either invariance or distinctiveness is sought. We show that it is not immediately obvious how existing methods such as SimCLR can be extended to do so. Instead, we introduce a number of formal requirements that all contrastive formulations must satisfy, and propose a practical construction which satisfies these requirements. In order to maximise the reach of this analysis, we express all components of noise contrastive formulations as the choice of certain generalized transformations of the data (GDTs), including data sampling. We then consider videos as an example of data in which a large variety of transformations are applicable, accounting for the extra modalities - for which we analyze audio and text - and the dimension of time. We find that being invariant to certain transformations and distinctive to others is critical to learning effective video representations, improving the state-of-the-art for multiple benchmarks by a large margin, and even surpassing supervised pretraining. Code and pretrained models are available(1).
Pages: 9557-9567 (11 pages)
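The abstract describes learning representations via noise contrastive learning over pairs of transformed samples, being invariant to some transformations and distinctive to others. As an illustrative sketch only, here is a minimal NumPy version of the standard InfoNCE contrastive objective that methods such as SimCLR use (this is the baseline formulation the paper generalizes, not the paper's GDT construction; the function name and toy data are hypothetical):

```python
import numpy as np

def infonce_loss(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE loss over two batches of embeddings.

    z_a[i] and z_b[i] are embeddings of two transformed views of the
    same sample (positives); all other cross-pairs act as negatives.
    """
    # L2-normalise so the dot product is cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature   # (N, N) similarity matrix
    labels = np.arange(len(z_a))         # positives lie on the diagonal
    # Cross-entropy of the softmax over each row against the diagonal.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[labels, labels].mean()

# Toy check: correctly paired views give a lower loss than misaligned ones.
rng = np.random.default_rng(0)
views = rng.normal(size=(8, 16))
aligned = infonce_loss(views, views)
shuffled = infonce_loss(views, np.roll(views, 1, axis=0))
print(aligned < shuffled)  # misaligned pairs yield a higher loss
```

The temperature sharpens the softmax over similarities; small values (e.g. 0.1) strongly penalize negatives that come close to the positive pair.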