Group-based siamese self-supervised learning

Cited by: 0
|
Authors
Li, Zhongnian [1 ]
Wang, Jiayu [1 ]
Geng, Qingcong [1 ]
Xu, Xinzheng [1 ]
Affiliations
[1] China Univ Min & Technol, Sch Comp Sci & Technol, Xuzhou 221116, Peoples R China
Source
ELECTRONIC RESEARCH ARCHIVE | 2024, Vol. 32, No. 8
Keywords
self-supervised learning; average feature; multiple views; classification tasks; siamese network;
DOI
10.3934/era.2024226
CLC classification
O1 [Mathematics];
Subject classification code
0701 ; 070101 ;
Abstract
In this paper, we introduced a novel group self-supervised learning approach designed to improve visual representation learning. This method aimed to rectify a limitation of conventional self-supervised learning, which tends to embed distortion invariance using single-view features. We argue that a better representation can be achieved by building a group of features derived from multiple views. To extend the siamese self-supervised architecture, we increased the number of image instances in each crop, enabling us to obtain an average feature over a group of views and use it as the distortion-invariant embedding. Training efficiency increased greatly, with rapid convergence. When combined with a robust linear protocol, this group self-supervised learning model achieved competitive results on CIFAR-10, CIFAR-100, Tiny ImageNet, and ImageNet-100 classification tasks. Most importantly, our model demonstrated significant convergence gains within just 30 epochs, as opposed to the typical 1000 epochs required by most other self-supervised techniques.
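To make the group-averaging idea from the abstract concrete, the following is a minimal sketch of a siamese training step in which the target embedding is the mean feature over a group of augmented views. It assumes a SimSiam-style setup; every name here (GroupSiamese, backbone, projector, predictor, the group of views, the negative cosine loss) is an illustrative assumption and not the authors' published implementation.

```python
# Minimal sketch: siamese SSL step whose target is the average embedding of a
# group of augmented views (stop-gradient), per the idea in the abstract.
# All names and the loss choice are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class GroupSiamese(nn.Module):
    def __init__(self, feat_dim=2048, proj_dim=256):
        super().__init__()
        resnet = torchvision.models.resnet50(weights=None)
        resnet.fc = nn.Identity()            # backbone outputs 2048-d features
        self.backbone = resnet
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, proj_dim), nn.BatchNorm1d(proj_dim), nn.ReLU(),
            nn.Linear(proj_dim, proj_dim),
        )
        self.predictor = nn.Sequential(
            nn.Linear(proj_dim, proj_dim), nn.BatchNorm1d(proj_dim), nn.ReLU(),
            nn.Linear(proj_dim, proj_dim),
        )

    def embed(self, x):
        return self.projector(self.backbone(x))

    def forward(self, query_view, group_views):
        # Online branch: predict the target from a single augmented view.
        p = self.predictor(self.embed(query_view))
        # Target branch: average the embeddings of a group of augmented views
        # and use the mean as the distortion-invariant target (stop-gradient).
        with torch.no_grad():
            group = torch.stack([self.embed(v) for v in group_views], dim=0)
            target = group.mean(dim=0)
        # Negative cosine similarity, as in common siamese SSL objectives.
        return -F.cosine_similarity(p, target, dim=-1).mean()


if __name__ == "__main__":
    model = GroupSiamese()
    query = torch.randn(4, 3, 224, 224)                      # one augmented view
    group = [torch.randn(4, 3, 224, 224) for _ in range(4)]  # group of views
    loss = model(query, group)
    print(loss.item())
```

In this sketch the only departure from a standard two-view siamese loss is that the stop-gradient target is a mean over several views rather than a single view, which is the mechanism the abstract credits for the faster convergence.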
Pages: 4913-4925
Number of pages: 13
Related papers
50 records in total
  • [21] PSNSleep: a self-supervised learning method for sleep staging based on Siamese networks with only positive sample pairs
    You, Yuyang
    Chang, Shuohua
    Yang, Zhihong
    Sun, Qihang
    FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [22] Gated Self-supervised Learning for Improving Supervised Learning
    Fuadi, Erland Hillman
    Ruslim, Aristo Renaldo
    Wardhana, Putu Wahyu Kusuma
    Yudistira, Novanto
    2024 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI 2024, 2024, : 611 - 615
  • [23] Self-Supervised Dialogue Learning
    Wu, Jiawei
    Wang, Xin
    Wang, William Yang
    57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 3857 - 3867
  • [24] Self-supervised learning model
    Saga, Kazushie
    Sugasaka, Tamami
    Sekiguchi, Minoru
    Fujitsu Scientific and Technical Journal, 1993, 29 (03): : 209 - 216
  • [25] Longitudinal self-supervised learning
    Zhao, Qingyu
    Liu, Zixuan
    Adeli, Ehsan
    Pohl, Kilian M.
    MEDICAL IMAGE ANALYSIS, 2021, 71
  • [26] Credal Self-Supervised Learning
    Lienen, Julian
    Huellermeier, Eyke
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [27] Self-Supervised Learning for Recommendation
    Huang, Chao
    Xia, Lianghao
    Wang, Xiang
    He, Xiangnan
    Yin, Dawei
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 5136 - 5139
  • [28] Quantum self-supervised learning
    Jaderberg, B.
    Anderson, L. W.
    Xie, W.
    Albanie, S.
    Kiffner, M.
    Jaksch, D.
    QUANTUM SCIENCE AND TECHNOLOGY, 2022, 7 (03):
  • [29] Self-Supervised Learning for Electroencephalography
    Rafiei, Mohammad H.
    Gauthier, Lynne V.
    Adeli, Hojjat
    Takabi, Daniel
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (02) : 1457 - 1471
  • [30] Self-supervised group meiosis contrastive learning for EEG-based emotion recognition
    Kan, Haoning
    Yu, Jiale
    Huang, Jiajin
    Liu, Zihe
    Wang, Heqian
    Zhou, Haiyan
    APPLIED INTELLIGENCE, 2023, 53 (22) : 27207 - 27225