Deep Contrastive Representation Learning With Self-Distillation

Cited by: 64
Authors
Xiao, Zhiwen [1 ,2 ,3 ]
Xing, Huanlai [1 ,2 ,3 ]
Zhao, Bowen [1 ,2 ,3 ]
Qu, Rong [4 ]
Luo, Shouxi [1 ,2 ,3 ]
Dai, Penglin [1 ,2 ,3 ]
Li, Ke [1 ,2 ,3 ]
Zhu, Zonghai [1 ,2 ,3 ]
Affiliations
[1] Southwest Jiaotong Univ, Sch Comp & Artificial Intelligence, Chengdu 610031, Peoples R China
[2] Southwest Jiaotong Univ, Tangshan Inst, Tangshan 063000, Peoples R China
[3] Minist Educ, Engn Res Ctr Sustainable Urban Intelligent Transpo, Beijing, Peoples R China
[4] Univ Nottingham, Sch Comp Sci, Nottingham NG7 2RD, England
Funding
National Natural Science Foundation of China;
Keywords
Contrastive learning; knowledge distillation; representation learning; time series classification; time series clustering
DOI
10.1109/TETCI.2023.3304948
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Recently, contrastive learning (CL) has become a promising way of learning discriminative representations from time series data. In the representation hierarchy, the semantic information extracted at lower levels is the basis of that captured at higher levels. Low-level semantic information is therefore essential and should be considered in the CL process. However, existing CL algorithms mainly focus on the similarity of high-level semantic information, and additionally considering the similarity of low-level semantic information may improve CL performance. To this end, we present deep contrastive representation learning with self-distillation (DCRLS) for the time series domain. DCRLS gracefully combines data augmentation, deep contrastive learning, and self-distillation. Our data augmentation provides different views of the same sample as the input of DCRLS. Unlike most CL algorithms, which concentrate on high-level semantic information only, our deep contrastive learning also considers the contrastive similarity of low-level semantic information between peer residual blocks. Our self-distillation promotes knowledge flow from high-level to low-level blocks to help regularize DCRLS during knowledge transfer. The experimental results demonstrate that DCRLS-based structures achieve excellent classification and clustering performance on 36 UCR2018 datasets.
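The abstract outlines three coupled pieces: two augmented views per sample, contrastive losses applied at every peer residual block rather than only at the final one, and self-distillation that lets the deepest block teach the shallower ones. Below is a minimal sketch of how such a training objective could be assembled, assuming a PyTorch-style encoder built from stacked residual blocks with one projection head per block; the function names, mean-pooling, MSE distillation term, and loss weights alpha and beta are illustrative assumptions, not the authors' exact DCRLS implementation.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style normalized-temperature cross-entropy over two batches of views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                 # (2N, d) stacked embeddings
    sim = z @ z.t() / temperature                  # (2N, 2N) scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))              # exclude self-pairs
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def dcrls_loss(blocks, heads, view1, view2, alpha=1.0, beta=0.1):
    """blocks: residual blocks (shallow -> deep); heads: per-block projection heads
    mapping pooled block features to a shared embedding size (hypothetical setup)."""
    h1, h2, z1s, z2s = view1, view2, [], []
    for block, head in zip(blocks, heads):
        h1, h2 = block(h1), block(h2)              # feature maps of shape (B, C, T)
        z1s.append(head(h1.mean(dim=-1)))          # pool over time, then project
        z2s.append(head(h2.mean(dim=-1)))
    # Contrast the two views at every level, not only the high-level output.
    contrastive = sum(nt_xent(a, b) for a, b in zip(z1s, z2s)) / len(z1s)
    # Self-distillation: the deepest block's embedding teaches the shallower blocks.
    teacher = z1s[-1].detach()
    distill = sum(F.mse_loss(z, teacher) for z in z1s[:-1]) / max(len(z1s) - 1, 1)
    return alpha * contrastive + beta * distill
```

Detaching the teacher embedding keeps the distillation term from pulling the deepest block toward its shallower students, so knowledge flows only from high-level to low-level blocks, in line with the direction described in the abstract.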
Pages: 3-15
Number of pages: 13
Related papers
50 records in total
  • [21] Self-distillation object segmentation via pyramid knowledge representation and transfer
    Zheng, Yunfei
    Sun, Meng
    Wang, Xiaobing
    Cao, Tieyong
    Zhang, Xiongwei
    Xing, Lixing
    Fang, Zheng
    MULTIMEDIA SYSTEMS, 2023, 29 (05) : 2615 - 2631
  • [22] Towards Compact Single Image Super-Resolution via Contrastive Self-distillation
    Wang, Yanbo
    Lin, Shaohui
    Qu, Yanyun
    Wu, Haiyan
    Zhang, Zhizhong
    Xie, Yuan
    Yao, Angela
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 1122 - 1128
  • [23] Learning from Better Supervision: Self-distillation for Learning with Noisy Labels
    Baek, Kyungjune
    Lee, Seungho
    Shim, Hyunjung
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 1829 - 1835
  • [26] Adjustable super-resolution network via deep supervised learning and progressive self-distillation
    Li, Juncheng
    Fang, Faming
    Zeng, Tieyong
    Zhang, Guixu
    Wang, Xizhao
    NEUROCOMPUTING, 2022, 500 : 379 - 393
  • [27] Retinal vessel segmentation based on self-distillation and implicit neural representation
    Gu, Jia
    Tian, Fangzheng
    Oh, Il-Seok
    APPLIED INTELLIGENCE, 2023, 53 (12) : 15027 - 15044
  • [28] LaST: Label-Free Self-Distillation Contrastive Learning With Transformer Architecture for Remote Sensing Image Scene Classification
    Wang, Xuying
    Zhu, Jiawei
    Yan, Zhengliang
    Zhang, Zhaoyang
    Zhang, Yunsheng
    Chen, Yansheng
    Li, Haifeng
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2022, 19
  • [29] DeSD: Self-Supervised Learning with Deep Self-Distillation for 3D Medical Image Segmentation
    Ye, Yiwen
    Zhang, Jianpeng
    Chen, Ziyang
    Xia, Yong
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT IV, 2022, 13434 : 545 - 555
  • [30] Enhancing learning on uncertain pixels in self-distillation for object segmentation
    Chen, Lei
    Cao, Tieyong
    Zheng, Yunfei
    Wang, Yang
    Zhang, Bo
    Yang, Jibin
    COMPLEX & INTELLIGENT SYSTEMS, 2024, 10 (05) : 6545 - 6557