Deep Contrastive Learning: A Survey

Cited by: 0
Authors
Zhang C.-S. [1 ]
Chen J. [1 ]
Li Q.-L. [1 ]
Deng B.-Q. [1 ]
Wang J. [1 ]
Chen C.-G. [1 ]
Affiliations
[1] Henan Key Lab of Big Data Analysis and Processing, Henan University, Kaifeng
Keywords
Contrastive learning; deep learning; feature extraction; metric learning; self-supervised learning
DOI
10.16383/j.aas.c220421
Abstract
In deep learning, how to exploit the vast amount of unlabeled data to enhance the feature extraction capability of deep neural networks has been a crucial research concern, and contrastive learning is an effective approach to this problem. It has attracted significant research effort in the past few years, and a large number of contrastive learning methods have been proposed. In this paper, we comprehensively survey recent advances in contrastive learning. We first propose a new taxonomy that divides existing methods into five categories: 1) sample pair construction methods, 2) image augmentation methods, 3) network architecture level methods, 4) loss function level methods, and 5) applications. Based on this taxonomy, we systematically review the methods in each category and analyze the characteristics and differences of representative methods. Moreover, we report and compare the performance of different contrastive learning methods on benchmark datasets. We also retrace the history of contrastive learning and discuss the differences and connections among contrastive learning, self-supervised learning, and metric learning. Finally, we discuss remaining issues and challenges in contrastive learning and outline its future directions. © 2023 Science Press. All rights reserved.
Pages: 15-39
Number of pages: 24
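
For readers unfamiliar with the area, the following minimal sketch illustrates the kind of objective the abstract's "loss function level methods" category covers: the InfoNCE/NT-Xent loss used by SimCLR-style contrastive methods. It is not taken from the surveyed paper; the function name, tensor shapes, and temperature value are illustrative assumptions.

import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, d) embeddings of two augmented views of the same N images."""
    n = z1.size(0)
    # Stack both views and project onto the unit sphere so that dot
    # products become cosine similarities.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d)
    sim = z @ z.t() / temperature                        # (2N, 2N) similarity logits
    sim.fill_diagonal_(float('-inf'))                    # exclude self-similarity
    # For sample i, the positive is the other view of the same image
    # (index i+N or i-N); all remaining samples act as negatives.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage with random stand-ins for encoder + projection-head outputs:
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = info_nce_loss(z1, z2)

Treating each positive pair as the correct "class" among all 2N-1 candidates is what lets cross-entropy implement the contrastive objective here; the temperature controls how sharply hard negatives are weighted.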