Deep Contrastive Representation Learning With Self-Distillation

Cited by: 64
Authors
Xiao, Zhiwen [1 ,2 ,3 ]
Xing, Huanlai [1 ,2 ,3 ]
Zhao, Bowen [1 ,2 ,3 ]
Qu, Rong [4 ]
Luo, Shouxi [1 ,2 ,3 ]
Dai, Penglin [1 ,2 ,3 ]
Li, Ke [1 ,2 ,3 ]
Zhu, Zonghai [1 ,2 ,3 ]
Affiliations
[1] Southwest Jiaotong Univ, Sch Comp & Artificial Intelligence, Chengdu 610031, Peoples R China
[2] Southwest Jiaotong Univ, Tangshan Inst, Tangshan 063000, Peoples R China
[3] Minist Educ, Engn Res Ctr Sustainable Urban Intelligent Transpo, Beijing, Peoples R China
[4] Univ Nottingham, Sch Comp Sci, Nottingham NG7 2RD, England
Funding
National Natural Science Foundation of China;
Keywords
Contrastive learning; knowledge distillation; representation learning; time series classification; time series clustering;
DOI
10.1109/TETCI.2023.3304948
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, contrastive learning (CL) has emerged as a promising way of learning discriminative representations from time series data. In the representation hierarchy, semantic information extracted at lower levels is the basis of that captured at higher levels. Low-level semantic information is therefore essential and should be considered in the CL process. However, existing CL algorithms mainly focus on the similarity of high-level semantic information, and taking the similarity of low-level semantic information into account may improve the performance of CL. To this end, we present deep contrastive representation learning with self-distillation (DCRLS) for the time series domain. DCRLS gracefully combines data augmentation, deep contrastive learning, and self-distillation. Our data augmentation generates different views of the same sample as the input of DCRLS. Unlike most CL algorithms, which concentrate on high-level semantic information only, our deep contrastive learning also considers the contrastive similarity of low-level semantic information between peer residual blocks. Our self-distillation promotes knowledge flow from high-level to low-level blocks to help regularize DCRLS during knowledge transfer. The experimental results demonstrate that the DCRLS-based structures achieve excellent classification and clustering performance on 36 UCR2018 datasets.
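The abstract describes two coupled ideas: contrasting peer residual blocks across two augmented views of the same series, and distilling the deepest block's representation into the shallower blocks. The sketch below is a minimal, illustrative PyTorch rendering of those two loss terms, not the authors' DCRLS implementation; the block widths, projection dimension, temperature, distillation weight, and the jitter-style augmentation are all assumptions chosen only to make the example self-contained and runnable.

```python
# Illustrative sketch only: per-block contrastive alignment between two views,
# plus self-distillation from the deepest block to shallower ones.
# Hyperparameters and the augmentation are assumptions, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResBlock(nn.Module):
    """1-D convolutional residual block for (batch, channels, length) inputs."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1)
        self.skip = nn.Conv1d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        h = F.relu(self.conv1(x))
        h = self.conv2(h)
        return F.relu(h + self.skip(x))


class Encoder(nn.Module):
    """Stack of residual blocks; returns one normalized embedding per block."""
    def __init__(self, in_ch=1, widths=(32, 64, 128), proj_dim=64):
        super().__init__()
        chans = (in_ch,) + tuple(widths)
        self.blocks = nn.ModuleList(
            [ResBlock(chans[i], chans[i + 1]) for i in range(len(widths))]
        )
        # One projection head per block so every level lives in the same space.
        self.heads = nn.ModuleList([nn.Linear(w, proj_dim) for w in widths])

    def forward(self, x):                        # x: (batch, in_ch, length)
        feats = []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            z = head(x.mean(dim=-1))             # global average pool, then project
            feats.append(F.normalize(z, dim=-1))
        return feats                             # ordered low-level ... high-level


def info_nce(za, zb, temperature=0.1):
    """Symmetric InfoNCE between two views; matching rows are positives."""
    logits = za @ zb.t() / temperature
    labels = torch.arange(za.size(0), device=za.device)
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))


def dcrls_style_loss(feats_a, feats_b, distill_weight=0.5):
    """Contrast peer blocks across the two views, then distill the deepest
    block's embedding (detached, acting as teacher) into shallower blocks."""
    contrast = sum(info_nce(fa, fb) for fa, fb in zip(feats_a, feats_b))
    teacher = feats_a[-1].detach()
    distill = sum(F.mse_loss(f, teacher) for f in feats_a[:-1])
    return contrast + distill_weight * distill


if __name__ == "__main__":
    encoder = Encoder()
    x = torch.randn(8, 1, 128)                   # toy batch of univariate series
    # Jitter augmentation stands in for the paper's augmentation pipeline.
    view_a = x + 0.1 * torch.randn_like(x)
    view_b = x + 0.1 * torch.randn_like(x)
    loss = dcrls_style_loss(encoder(view_a), encoder(view_b))
    loss.backward()
    print(f"toy loss: {loss.item():.4f}")
```

In this reading, the per-block InfoNCE terms supply both the low-level and high-level contrastive signals, while the detached deepest embedding acts as an in-network teacher for self-distillation; the paper's actual architecture, augmentations, and loss weighting may differ.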
Pages: 3-15
Page count: 13
Related papers (50 in total)
  • [41] Quan, Dou; Wei, Huiyuan; Wang, Shuang; Lei, Ruiqi; Duan, Baorui; Li, Yi; Hou, Biao; Jiao, Licheng. Self-Distillation Feature Learning Network for Optical and SAR Image Registration. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60.
  • [42] Lee, HyunJae; Song, Heon; Lee, Hyeonsoo; Lee, Gi-hyeon; Park, Suyeong; Yoo, Donggeun. Bayesian Optimization Meets Self-Distillation. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023: 1696-1705.
  • [43] Zheng, Yujie; Wang, Chong; Tao, Chenchen; Lin, Sunqi; Qian, Jiangbo; Wu, Jiafei. Restructuring the Teacher and Student in Self-Distillation. IEEE Transactions on Image Processing, 2024, 33: 5551-5563.
  • [44] Liu, Mushui; Yu, Yunlong; Ji, Zhong; Han, Jungong; Zhang, Zhongfei. Tolerant Self-Distillation for image classification. NEURAL NETWORKS, 2024, 174.
  • [45] Park, Hanhoon. Semantic Super-Resolution via Self-Distillation and Adversarial Learning. IEEE ACCESS, 2024, 12: 2361-2370.
  • [46] Yamlahi, Amine; Thuy Nuong Tran; Godau, Patrick; Schellenberg, Melanie; Michael, Dominik; Smidt, Finn-Henri; Noelke, Jan-Hinrich; Adler, Tim J.; Tizabi, Minu Dietlinde; Nwoye, Chinedu Innocent; Padoy, Nicolas; Maier-Hein, Lena. Self-distillation for Surgical Action Recognition. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2023, PT IX, 2023, 14228: 637-646.
  • [47] Liu, Chong; Xie, Ruobing; Liu, Xiaoyang; Wang, Pinzheng; Zheng, Rongqin; Zhang, Lixin; Li, Juntao; Xia, Feng; Lin, Leyu. Future Augmentation with Self-distillation in Recommendation. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: APPLIED DATA SCIENCE AND DEMO TRACK, ECML PKDD 2023, PT VI, 2023, 14174: 602-618.
  • [48] Xu, Kai; Wang, Lichun; Li, Shuang; Xin, Jianjia; Yin, Baocai. Self-Distillation With Augmentation in Feature Space. IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34 (10): 9578-9590.
  • [49] Yuting Li; Linbo Qing; Xiaohai He; Honggang Chen; Qiang Liu. Image classification based on self-distillation. Applied Intelligence, 2023, 53: 9396-9408.
  • [50] Sun, Jinghan; Wei, Dong; Ma, Kai; Wang, Liansheng; Zheng, Yefeng. Unsupervised Representation Learning Meets Pseudo-Label Supervised Self-Distillation: A New Approach to Rare Disease Classification. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2021, PT V, 2021, 12905: 519-529.