Learning the Relation Between Similarity Loss and Clustering Loss in Self-Supervised Learning

Cited by: 3
Authors
Ge, Jidong [1 ]
Liu, Yuxiang [1 ]
Gui, Jie [2 ,3 ]
Fang, Lanting [3 ]
Lin, Ming [4 ,5 ]
Kwok, James Tin-Yau [6 ]
Huang, Liguo [7 ]
Luo, Bin [1 ]
Affiliations
[1] Nanjing Univ, Software Inst, State Key Lab Novel Software Technol, Nanjing 210093, Peoples R China
[2] Southeast Univ, Sch Cyber Sci & Engn, Nanjing 210096, Peoples R China
[3] Purple Mt Labs, Nanjing 210000, Peoples R China
[4] Alibaba Grp, Bellevue, WA 98004 USA
[5] Amazon.com LLC, Bellevue, WA 98004 USA
[6] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[7] Southern Methodist Univ, Dept Comp Sci, Dallas, TX 75205 USA
Funding
US National Science Foundation
Keywords
Self-supervised learning; image representation; image classification;
DOI
10.1109/TIP.2023.3276708
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Self-supervised learning enables networks to learn discriminative features from massive amounts of unlabeled data. Most state-of-the-art methods maximize the similarity between two augmentations of one image based on contrastive learning. By exploiting the consistency between the two augmentations, the burden of manual annotation is removed. Contrastive learning uses instance-level information to learn robust features; however, the learned information is probably confined to different views of the same instance. In this paper, we attempt to leverage the similarity between two distinct images to boost representation learning in self-supervised learning. In contrast to instance-level information, the similarity between two distinct images may provide more useful information. We also analyze the relation between the similarity loss and the feature-level cross-entropy loss. Both losses are essential to most deep learning methods, yet the relation between them has been unclear. The similarity loss helps obtain instance-level representations, while the feature-level cross-entropy loss helps mine the similarity between two distinct images. We provide theoretical analyses and experiments showing that a suitable combination of these two losses yields state-of-the-art results. Code is available at https://github.com/guijiejie/ICCL.
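The abstract describes combining a similarity loss between two augmented views with a feature-level cross-entropy loss. The paper's exact formulation (loss weights, temperature, projector architecture) is not given here, so the following is only a minimal NumPy sketch of one common reading: negative cosine similarity between paired embeddings, plus a cross-entropy term that treats a softmax over the feature dimension as a soft cluster assignment. All function names and the weights `alpha`, `beta`, `tau` are illustrative assumptions, not the authors' code.

```python
import numpy as np

def similarity_loss(z1, z2):
    """Negative mean cosine similarity between paired embeddings."""
    z1n = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2n = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return -np.mean(np.sum(z1n * z2n, axis=1))

def feature_ce_loss(z1, z2, tau=0.1):
    """Cross-entropy between soft assignments of the two views.

    A softmax over the feature dimension turns each embedding into a
    soft cluster assignment; aligning the two views' assignments is one
    way a feature-level cross-entropy can mine cross-image similarity.
    """
    def softmax(x):
        e = np.exp((x - x.max(axis=1, keepdims=True)) / tau)
        return e / e.sum(axis=1, keepdims=True)
    p, q = softmax(z1), softmax(z2)
    return -np.mean(np.sum(p * np.log(q + 1e-12), axis=1))

def combined_loss(z1, z2, alpha=1.0, beta=1.0):
    """Weighted combination of the two losses, as the abstract suggests."""
    return alpha * similarity_loss(z1, z2) + beta * feature_ce_loss(z1, z2)
```

For identical views, `similarity_loss` reaches its minimum of -1 and the cross-entropy term reduces to the entropy of the soft assignment, so the combination rewards both view agreement and confident assignments.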
Pages: 3442-3454
Page count: 13