Similarity Contrastive Estimation for Self-Supervised Soft Contrastive Learning

Cited by: 16
Authors
Denize, Julien [1 ]
Rabarisoa, Jaonary [1 ]
Orcesi, Astrid [1 ]
Herault, Romain [2 ]
Canu, Stephane [2 ]
Affiliations
[1] Univ Paris Saclay, CEA, LIST, F-91120 Palaiseau, France
[2] Normandie Univ, INSA Rouen, LITIS, F-76801 St Etienne Du Rouvray, France
Source
2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023
Keywords
DOI
10.1109/WACV56688.2023.00273
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Contrastive representation learning has proven to be an effective self-supervised learning method. Most successful approaches are based on Noise Contrastive Estimation (NCE) and use different views of an instance as positives that should be contrasted with other instances, called negatives, that are considered as noise. However, several instances in a dataset are drawn from the same distribution and share underlying semantic information. A good data representation should contain relations, or semantic similarity, between the instances. Contrastive learning implicitly learns relations but considering all negatives as noise harms the quality of the learned relations. To circumvent this issue, we propose a novel formulation of contrastive learning using semantic similarity between instances called Similarity Contrastive Estimation (SCE). Our training objective is a soft contrastive learning one. Instead of hard classifying positives and negatives, we estimate from one view of a batch a continuous distribution to push or pull instances based on their semantic similarities. This target similarity distribution is sharpened to eliminate noisy relations. The model predicts for each instance, from another view, the target distribution while contrasting its positive with negatives. Experimental results show that SCE is Top-1 on the ImageNet linear evaluation protocol at 100 pretraining epochs with 72.1% accuracy and is competitive with state-of-the-art algorithms by reaching 75.4% for 200 epochs with multi-crop. We also show that SCE is able to generalize to several tasks. Source code is available here: https://github.com/CEA-LIST/SCE.
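The abstract describes the SCE objective only at a high level. As a rough illustration, the following PyTorch-style sketch shows one way such a soft contrastive loss could be written, assuming a MoCo-style pair of online and momentum (target) encoders; the function name sce_loss, the temperatures tau and tau_m, and the mixing weight lam are illustrative assumptions, not names taken from the authors' released code (see the repository linked above for the reference implementation).

    import torch
    import torch.nn.functional as F

    def sce_loss(z_online, z_target, tau=0.1, tau_m=0.05, lam=0.5):
        """Soft contrastive objective in the spirit of SCE (illustrative sketch).

        z_online: (N, D) projections of view 1 from the online encoder.
        z_target: (N, D) projections of view 2 from a momentum/target encoder.
        tau:      temperature of the predicted distribution.
        tau_m:    sharper temperature of the target similarity distribution.
        lam:      weight between the hard one-hot target and the soft relations.
        """
        z_online = F.normalize(z_online, dim=1)
        z_target = F.normalize(z_target, dim=1).detach()  # no gradient through targets

        n = z_online.size(0)
        eye = torch.eye(n, device=z_online.device, dtype=torch.bool)

        # Target relations: similarities among target embeddings, with self-similarity
        # masked out and sharpened by the lower temperature tau_m.
        sim_tt = (z_target @ z_target.t()) / tau_m
        sim_tt = sim_tt.masked_fill(eye, float('-inf'))
        relations = sim_tt.softmax(dim=1)

        # Soft target: mixture of the one-hot positive and the sharpened relations.
        w = lam * eye.float() + (1.0 - lam) * relations

        # Predicted distribution: each online embedding against all target embeddings.
        log_p = ((z_online @ z_target.t()) / tau).log_softmax(dim=1)

        # Cross-entropy between the soft target and the predicted distribution.
        return -(w * log_p).sum(dim=1).mean()

The full method may additionally use a larger pool of negatives (e.g., a memory queue of momentum features), multi-crop views, and a loss applied symmetrically over both views; this sketch only covers the in-batch objective implied by the abstract.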
Pages: 2705-2715
Page count: 11
Related papers
50 items in total
  • [41] Contrastive Self-Supervised Learning for Optical Music Recognition
    Penarrubia, Carlos
    Valero-Mas, Jose J.
    Calvo-Zaragoza, Jorge
    DOCUMENT ANALYSIS SYSTEMS, DAS 2024, 2024, 14994 : 312 - 326
  • [42] DimCL: Dimensional Contrastive Learning for Improving Self-Supervised Learning
    Nguyen, Thanh
    Pham, Trung Xuan
    Zhang, Chaoning
    Luu, Tung M.
    Vu, Thang
    Yoo, Chang D.
    IEEE ACCESS, 2023, 11 : 21534 - 21545
  • [43] Contrastive Similarity Matching for Supervised Learning
    Qin, Shanshan
    Mudur, Nayantara
    Pehlevan, Cengiz
    NEURAL COMPUTATION, 2021, 33 (05) : 1300 - 1328
  • [44] Generative and Contrastive Self-Supervised Learning for Graph Anomaly Detection
    Zheng, Yu
    Jin, Ming
    Liu, Yixin
    Chi, Lianhua
    Phan, Khoa T.
    Chen, Yi-Ping Phoebe
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (12) : 12220 - 12233
  • [45] Self-supervised Variational Contrastive Learning with Applications to Face Understanding
    Yavuz, Mehmet Can
    Yanikoglu, Berrin
    2024 IEEE 18TH INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION, FG 2024, 2024,
  • [46] Feature Augmentation for Self-supervised Contrastive Learning: A Closer Look
    Zhang, Yong
    Zhu, Rui
    Zhang, Shifeng
    Zhou, Xu
    Chen, Shifeng
    Chen, Xiaofan
    2024 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN 2024, 2024,
  • [47] A NOVEL CONTRASTIVE LEARNING FRAMEWORK FOR SELF-SUPERVISED ANOMALY DETECTION
    Li, Jingze
    Lian, Zhichao
    Li, Min
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 3366 - 3370
  • [48] Self-supervised Graph Contrastive Learning for Video Question Answering
    Yao X.
    Gao J.-Y.
    Xu C.-S.
    Ruan Jian Xue Bao/Journal of Software, 2023, 34 (05) : 2083 - 2100
  • [49] Part Aware Contrastive Learning for Self-Supervised Action Recognition
    Hua, Yilei
    Wu, Wenhan
    Zheng, Ce
    Lu, Aidong
    Liu, Mengyuan
    Chen, Chen
    Wu, Shiqian
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 855 - 863
  • [50] Contrastive and Non-Contrastive Strategies for Federated Self-Supervised Representation Learning and Deep Clustering
    Miao, Runxuan
    Koyuncu, Erdem
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2024, 18 (06) : 1070 - 1084