Deep Contrastive Representation Learning With Self-Distillation

Times Cited: 64
Authors
Xiao, Zhiwen [1 ,2 ,3 ]
Xing, Huanlai [1 ,2 ,3 ]
Zhao, Bowen [1 ,2 ,3 ]
Qu, Rong [4 ]
Luo, Shouxi [1 ,2 ,3 ]
Dai, Penglin [1 ,2 ,3 ]
Li, Ke [1 ,2 ,3 ]
Zhu, Zonghai [1 ,2 ,3 ]
Affiliations
[1] Southwest Jiaotong Univ, Sch Comp & Artificial Intelligence, Chengdu 610031, Peoples R China
[2] Southwest Jiaotong Univ, Tangshan Inst, Tangshan 063000, Peoples R China
[3] Minist Educ, Engn Res Ctr Sustainable Urban Intelligent Transpo, Beijing, Peoples R China
[4] Univ Nottingham, Sch Comp Sci, Nottingham NG7 2RD, England
Funding
National Natural Science Foundation of China;
Keywords
Contrastive learning; knowledge distillation; representation learning; time series classification; time series clustering;
DOI
10.1109/TETCI.2023.3304948
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, contrastive learning (CL) has emerged as a promising way of learning discriminative representations from time series data. In the representation hierarchy, the semantic information extracted at lower levels is the basis of that captured at higher levels. Low-level semantic information is therefore essential and should be considered in the CL process. However, existing CL algorithms mainly focus on the similarity of high-level semantic information, and considering the similarity of low-level semantic information may improve the performance of CL. To this end, we present deep contrastive representation learning with self-distillation (DCRLS) for the time series domain. DCRLS gracefully combines data augmentation, deep contrastive learning, and self-distillation. Our data augmentation provides different views of the same sample as the input of DCRLS. Unlike most CL algorithms, which concentrate on high-level semantic information only, our deep contrastive learning also considers the contrastive similarity of low-level semantic information between peer residual blocks. Our self-distillation promotes knowledge flow from high-level to low-level blocks to help regularize DCRLS during knowledge transfer. The experimental results demonstrate that the DCRLS-based structures achieve excellent classification and clustering performance on 36 UCR2018 datasets.
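The following is a minimal, illustrative PyTorch sketch of the ideas described in the abstract, not the authors' implementation: two augmented views of a time series are encoded by a stack of residual blocks, an NT-Xent contrastive loss is applied to the pooled features of every block (low-level as well as high-level), and a self-distillation term pushes the low-level block features toward the detached top-level features. All module names, layer sizes, the jitter augmentation, and the loss weighting are assumptions made for illustration only.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ResBlock(nn.Module):
        """1-D convolutional residual block for time-series inputs."""
        def __init__(self, channels: int):
            super().__init__()
            self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
            self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
            self.bn1 = nn.BatchNorm1d(channels)
            self.bn2 = nn.BatchNorm1d(channels)

        def forward(self, x):
            out = F.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return F.relu(out + x)

    def nt_xent(z1, z2, temperature: float = 0.5):
        """SimCLR-style contrastive loss between two batches of views."""
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        z = torch.cat([z1, z2], dim=0)                      # (2N, D)
        sim = z @ z.t() / temperature                       # cosine similarities
        n = z1.size(0)
        mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
        sim.masked_fill_(mask, float('-inf'))               # drop self-similarity
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
        return F.cross_entropy(sim, targets)

    class MultiLevelEncoder(nn.Module):
        """Residual blocks whose per-block pooled features are all exposed."""
        def __init__(self, in_channels: int = 1, channels: int = 64, num_blocks: int = 3):
            super().__init__()
            self.stem = nn.Conv1d(in_channels, channels, kernel_size=7, padding=3)
            self.blocks = nn.ModuleList(ResBlock(channels) for _ in range(num_blocks))

        def forward(self, x):
            feats = []
            h = self.stem(x)
            for block in self.blocks:
                h = block(h)
                feats.append(h.mean(dim=-1))                # global average pool -> (N, C)
            return feats                                     # low-level ... high-level

    def dcrls_style_loss(encoder, view1, view2, alpha: float = 0.5, tau: float = 2.0):
        """Per-block contrastive losses plus self-distillation toward the top block."""
        f1, f2 = encoder(view1), encoder(view2)

        # Contrastive similarity at every level, not only the last one.
        contrastive = sum(nt_xent(a, b) for a, b in zip(f1, f2)) / len(f1)

        # Self-distillation: low-level blocks mimic the detached top-level features.
        teacher = F.softmax(f1[-1].detach() / tau, dim=1)
        distill = sum(
            F.kl_div(F.log_softmax(f / tau, dim=1), teacher, reduction='batchmean')
            for f in f1[:-1]
        ) / max(len(f1) - 1, 1)

        return contrastive + alpha * distill

    if __name__ == "__main__":
        enc = MultiLevelEncoder()
        x = torch.randn(8, 1, 128)                           # batch of univariate series
        # Two views of the same samples via simple jitter augmentation (assumed).
        view1 = x + 0.1 * torch.randn_like(x)
        view2 = x + 0.1 * torch.randn_like(x)
        loss = dcrls_style_loss(enc, view1, view2)
        loss.backward()
        print(loss.item())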
Pages: 3 - 15
Number of Pages: 13
Related Papers
50 records in total
  • [31] Probabilistic online self-distillation
    Tzelepi, Maria
    Passalis, Nikolaos
    Tefas, Anastasios
    NEUROCOMPUTING, 2022, 493 : 592 - 604
  • [32] Class Incremental Learning With Deep Contrastive Learning and Attention Distillation
    Zhu, Jitao
    Luo, Guibo
    Duan, Baishan
    Zhu, Yuesheng
    IEEE SIGNAL PROCESSING LETTERS, 2024, 31 : 1224 - 1228
  • [33] Eliminating Primacy Bias in Online Reinforcement Learning by Self-Distillation
    Li, Jingchen
    Shi, Haobin
    Wu, Huarui
    Zhao, Chunjiang
    Hwang, Kao-Shing
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, : 1 - 13
  • [34] Iterative Graph Self-Distillation
    Zhang, Hanlin
    Lin, Shuai
    Liu, Weiyang
    Zhou, Pan
    Tang, Jian
    Liang, Xiaodan
    Xing, Eric P.
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (03) : 1161 - 1169
  • [35] Self-distillation improves self-supervised learning for DNA sequence inference
    Yu, Tong
    Cheng, Lei
    Khalitov, Ruslan
    Olsson, Erland B.
    Yang, Zhirong
NEURAL NETWORKS, 2025, 183
  • [36] Data-Distortion Guided Self-Distillation for Deep Neural Networks
    Xu, Ting-Bing
    Liu, Cheng-Lin
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 5565 - 5572
  • [37] A multi-view contrastive learning and semi-supervised self-distillation framework for early recurrence prediction in ovarian cancer
    Dong, Chi
    Wu, Yujiao
    Sun, Bo
    Bo, Jiayi
    Huang, Yufei
    Geng, Yikang
    Zhang, Qianhui
    Liu, Ruixiang
    Guo, Wei
    Wang, Xingling
    Jiang, Xiran
COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, 2025, 119
  • [38] Modality-Aware Contrastive Instance Learning with Self-Distillation for Weakly-Supervised Audio-Visual Violence Detection
    Yu, Jiashuo
    Liu, Jinyu
    Cheng, Ying
    Feng, Rui
    Zhang, Yuejie
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 6278 - 6287
  • [39] Wasserstein Contrastive Representation Distillation
    Chen, Liqun
    Wang, Dong
    Gan, Zhe
    Liu, Jingjing
    Henao, Ricardo
    Carin, Lawrence
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 16291 - 16300