Align Representations with Base: A New Approach to Self-Supervised Learning

Cited by: 1
Authors
Zhang, Shaofeng [1]
Qiu, Lyn [1]
Zhu, Feng [2]
Yan, Junchi [1]
Zhang, Hengrui [1]
Zhao, Rui [1,2,3]
Li, Hongyang [2]
Yang, Xiaokang [1]
Affiliations
[1] Shanghai Jiao Tong Univ, Artificial Intelligence Inst, MoE Key Lab Artificial Intelligence, Shanghai, Peoples R China
[2] SenseTime Res, Hong Kong, Peoples R China
[3] Shanghai Jiao Tong Univ, Qing Yuan Res Inst, Shanghai, Peoples R China
Keywords
DOI
10.1109/CVPR52688.2022.01610
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Existing symmetric contrastive learning methods suffer from collapse (complete or dimensional) or from objectives with quadratic complexity. Departing from these methods, which maximize the mutual information between two generated views along either the instance or the feature dimension, the proposed paradigm introduces intermediate variables at the feature level and maximizes the consistency between these variables and the representations of each view. Specifically, the intermediate variables are the group of base vectors nearest to the representations; hence the method is called ARB (Align Representations with Base). Compared with other symmetric approaches, ARB 1) requires no negative pairs, so the complexity of the overall objective is linear; 2) reduces feature redundancy, increasing the information density of training samples; and 3) is more robust to the output dimension size, outperforming previous feature-wise methods by over 28% Top-1 accuracy on ImageNet-100 under low-dimension settings.
Pages: 16579-16588
Number of pages: 10
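The sketch below is only a rough illustration of the idea stated in the abstract, not the authors' reference implementation. It assumes the intermediate "base" for a batch of projector outputs Z is the nearest orthonormal matrix to Z (taken here from its SVD/polar decomposition, an assumption), and that each view's representations are aligned with the base computed from the other view. The names `nearest_base` and `arb_loss` are illustrative.

```python
# Hedged sketch of an ARB-style objective, per the abstract's description.
# Assumptions: the "base" is the nearest orthonormal matrix to the batch of
# representations; bases are used as fixed alignment targets for the other view.
import torch
import torch.nn.functional as F


def nearest_base(z: torch.Tensor) -> torch.Tensor:
    """Nearest orthonormal matrix to z (batch x dim), via polar decomposition."""
    u, _, vh = torch.linalg.svd(z, full_matrices=False)
    return u @ vh  # closest orthonormal factor to z in Frobenius norm


def arb_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """Symmetric alignment loss between representations and cross-view bases.

    z1, z2: (batch, dim) projector outputs of the two augmented views.
    No negative pairs are used, so the cost is linear in the batch size.
    """
    z1 = F.normalize(z1 - z1.mean(dim=0), dim=-1)
    z2 = F.normalize(z2 - z2.mean(dim=0), dim=-1)
    b1 = nearest_base(z1).detach()  # treat bases as fixed targets
    b2 = nearest_base(z2).detach()
    # Maximize per-sample cosine consistency between each view and the
    # base derived from the other view.
    return 2.0 - (z1 * b2).sum(dim=-1).mean() - (z2 * b1).sum(dim=-1).mean()
```

Because the objective only aligns each sample with its corresponding base vector, no pairwise similarity matrix over the batch is needed, which matches point 1) of the abstract (linear-order complexity without negative pairs).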