TimesURL: Self-Supervised Contrastive Learning for Universal Time Series Representation Learning

Cited by: 0
Authors: Liu, Jiexi [1,2]; Chen, Songcan [1,2]
Affiliations:
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing, Peoples R China
[2] MIIT Key Lab Pattern Anal & Machine Intelligence, Nanjing, Peoples R China
Keywords: (none listed)
DOI: not available
CLC Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract:
Learning universal time series representations applicable to various types of downstream tasks is challenging but valuable in real applications. Recently, researchers have attempted to leverage the success of self-supervised contrastive learning (SSCL) in Computer Vision (CV) and Natural Language Processing (NLP) to tackle time series representation. Nevertheless, due to the special temporal characteristics of time series, relying solely on empirical guidance from other domains may be ineffective and difficult to adapt to multiple downstream tasks. To this end, we review the three parts involved in SSCL: 1) designing augmentation methods for positive pairs, 2) constructing (hard) negative pairs, and 3) designing the SSCL loss. For 1) and 2), we find that unsuitable positive- and negative-pair construction may introduce inappropriate inductive biases that neither preserve temporal properties nor provide sufficient discriminative features. For 3), exploring segment- or instance-level semantic information alone is not enough for learning universal representations. To remedy these issues, we propose a novel self-supervised framework named TimesURL. Specifically, we first introduce a frequency-temporal-based augmentation that keeps the temporal property unchanged. We then construct double Universums as a special kind of hard negative to guide better contrastive learning. Additionally, we introduce time reconstruction as a joint optimization objective with contrastive learning to capture both segment- and instance-level information. As a result, TimesURL learns high-quality universal representations and achieves state-of-the-art performance on six different downstream tasks, including short- and long-term forecasting, imputation, classification, anomaly detection, and transfer learning.
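The abstract names two trainable mechanisms beyond the augmentation: mixup-style "Universum" hard negatives built from an anchor and in-batch negatives, and a contrastive loss optimized jointly with time reconstruction. The sketch below is a minimal PyTorch illustration of that general pattern only; `make_universum`, `joint_loss`, the fixed mixing coefficient `lam`, the temperature, and the loss weight are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def make_universum(anchor, negatives, lam=0.5):
    # Mix the anchor into each in-batch negative so the result lies
    # "between" samples: a mixup-style Universum hard negative.
    # anchor: (D,), negatives: (K, D) -> (K, D)
    return lam * anchor.unsqueeze(0) + (1.0 - lam) * negatives

def joint_loss(z1, z2, negatives, x, x_hat, temperature=0.1, weight=1.0):
    # (a) InfoNCE-style contrast: positive pair (z1, z2) scored against
    # the original negatives plus the Universum hard negatives.
    univ = make_universum(z1, negatives)             # (K, D)
    all_neg = torch.cat([negatives, univ], dim=0)    # (2K, D)
    pos = F.cosine_similarity(z1, z2, dim=0) / temperature                    # scalar
    neg = F.cosine_similarity(z1.unsqueeze(0), all_neg, dim=1) / temperature  # (2K,)
    logits = torch.cat([pos.unsqueeze(0), neg]).unsqueeze(0)  # (1, 2K+1)
    target = torch.zeros(1, dtype=torch.long)        # positive sits at index 0
    contrastive = F.cross_entropy(logits, target)
    # (b) Time reconstruction keeps segment-level information that a purely
    # instance-level contrastive objective can discard.
    reconstruction = F.mse_loss(x_hat, x)
    return contrastive + weight * reconstruction

# Toy usage with random stand-ins for encoder/decoder outputs.
D, K, T = 128, 7, 96
z1 = torch.randn(D, requires_grad=True)   # view 1 of one series
z2 = torch.randn(D)                       # view 2 of the same series
negatives = torch.randn(K, D)             # other series in the batch
x, x_hat = torch.randn(T), torch.randn(T, requires_grad=True)
joint_loss(z1, z2, negatives, x, x_hat).backward()
```

The reconstruction term supplies the segment-level signal that, per the abstract, pure instance-level contrast lacks; swapping in the paper's actual augmentation and double-Universum construction should leave this overall joint-loss structure intact.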
Pages: 13918-13926
Page count: 9
Related Papers (showing 10 of 50):
  • [1] Wickstrom, Kristoffer; Kampffmeyer, Michael; Mikalsen, Karl Oyvind; Jenssen, Robert. Mixing up contrastive learning: Self-supervised representation learning for time series. PATTERN RECOGNITION LETTERS, 2022, 155: 54-61.
  • [2] Yang, Xinyu; Zhang, Zhenguo; Cui, Rongyi. TimeCLR: A self-supervised contrastive learning framework for univariate time series representation. KNOWLEDGE-BASED SYSTEMS, 2022, 245.
  • [3] Darban, Zahra Zamanzadeh; Webb, Geoffrey I.; Pan, Shirui; Aggarwal, Charu C.; Salehi, Mahsa. CARLA: Self-supervised contrastive representation learning for time series anomaly detection. PATTERN RECOGNITION, 2025, 157.
  • [4] Eldele, Emadeldeen; Ragab, Mohamed; Chen, Zhenghua; Wu, Min; Kwoh, Chee-Keong; Li, Xiaoli; Guan, Cuntai. Self-Supervised Contrastive Representation Learning for Semi-Supervised Time-Series Classification. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (12): 15604-15618.
  • [5] Liang, Zhiyu; Liang, Chen; Liang, Zheng; Wang, Hongzhi; Zheng, Bo. UniTS: A Universal Time Series Analysis Framework Powered by Self-Supervised Representation Learning. COMPANION OF THE 2024 INTERNATIONAL CONFERENCE ON MANAGEMENT OF DATA, SIGMOD-COMPANION 2024, 2024: 480-483.
  • [6] Kotar, Klemen; Ilharco, Gabriel; Schmidt, Ludwig; Ehsani, Kiana; Mottaghi, Roozbeh. Contrasting Contrastive Self-Supervised Representation Learning Pipelines. 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021: 9929-9939.
  • [7] Liu, Bochong; Cai, Huaiyu; Wang, Yi; Chen, Xiaodong. Self-supervised contrastive representation learning for semantic segmentation. Xi'an Dianzi Keji Daxue Xuebao/Journal of Xidian University, 2024, 51 (01): 125-134.
  • [8] Wang, Jun; Lam, Max W. Y.; Su, Dan; Yu, Dong. Contrastive Separative Coding for Self-Supervised Representation Learning. 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021: 3865-3869.
  • [9] Wang, Qian; Zhang, Weiqi; Lei, Tianyi; Peng, Dezhong. Grouped Contrastive Learning of Self-Supervised Sentence Representation. APPLIED SCIENCES-BASEL, 2023, 13 (17).
  • [10] Liu, Ziyu; Alavi, Azadeh; Li, Minyi; Zhang, Xiang. Self-Supervised Contrastive Learning for Medical Time Series: A Systematic Review. SENSORS, 2023, 23 (09).