Universal representation learning for multivariate time series using the instance-level and cluster-level supervised contrastive learning

Cited by: 0
Authors
Moradinasab, Nazanin [1 ]
Sharma, Suchetha [2 ]
Bar-Yoseph, Ronen [3 ,4 ]
Radom-Aizik, Shlomit [3 ]
Bilchick, Kenneth C. [5 ]
Cooper, Dan M. [3 ,6 ]
Weltman, Arthur [7 ,8 ]
Brown, Donald E. [1 ,2 ]
Affiliations
[1] Univ Virginia, Dept Engn Syst & Environm, Charlottesville, VA 22904 USA
[2] Univ Virginia, Sch Data Sci, Charlottesville, VA 22904 USA
[3] Univ Calif Irvine, Pediat Exercise & Genom Res Ctr, Irvine, CA 92697 USA
[4] Ruth Rappaport Childrens Hosp, Pediat Pulm Inst, Rambam Hlth Care Campus, IL-3109601 Haifa, Israel
[5] Univ Virginia Hlth Syst, Dept Med, Cardiovasc Div, Charlottesville, VA 22903 USA
[6] Univ Calif Irvine, Inst Clin & Translat Sci, Irvine, CA 92697 USA
[7] Univ Virginia, Dept Kinesiol, Charlottesville, VA 22903 USA
[8] Univ Virginia, Dept Med, Div Endocrinol & Metab, Charlottesville, VA 22903 USA
Funding
US National Institutes of Health;
Keywords
Multivariate time series data; Contrastive learning; Classification; Interpretability;
DOI
10.1007/s10618-024-01006-1
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
The multivariate time series classification (MTSC) task aims to predict a class label for a given time series. Recently, modern deep learning-based approaches have achieved promising performance over traditional methods on MTSC tasks. The success of these approaches relies on access to massive amounts of labeled data (i.e., annotating or assigning tags to each sample to indicate its category). However, obtaining large amounts of labeled data is usually time-consuming and expensive in many real-world applications such as medicine, because annotation requires domain experts' knowledge. Insufficient labeled data prevents these models from learning discriminative features, resulting in poor margins that reduce generalization performance. To address this challenge, we propose a novel approach: supervised contrastive learning for time series classification (SupCon-TSC). This approach improves classification performance by learning discriminative low-dimensional representations of multivariate time series, and its end-to-end structure allows for interpretable outcomes. It is based on the supervised contrastive (SupCon) loss to learn the inherent structure of multivariate time series. First, two separate augmentation families, comprising strong and weak augmentation methods, are used to generate augmented data for the source and target networks, respectively. Second, we propose instance-level and cluster-level SupCon learning approaches that capture contextual information to learn discriminative, universal representations for multivariate time series datasets. In the instance-level SupCon learning approach, for each anchor instance coming from the source network, the low-variance output encodings from the target network are sampled as positive and negative instances based on their labels. In contrast, the cluster-level approach is performed between each instance and the cluster centers across batches; the cluster-level SupCon loss attempts to maximize the similarities between each instance and the cluster centers across batches. We tested this novel approach on two small cardiopulmonary exercise testing (CPET) datasets and the real-world UEA multivariate time series archive. The results of the SupCon-TSC model on the CPET datasets indicate its capability to learn more discriminative features than existing approaches when the dataset size is small. Moreover, the results on the UEA archive show that training a classifier on top of the universal representation features learned by our proposed method outperforms the state-of-the-art approaches.
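The instance-level and cluster-level SupCon losses summarized in the abstract can be illustrated with a short sketch. The PyTorch snippet below is only a minimal illustration under assumed conventions, not the authors' implementation: the function names (supcon_loss, cluster_supcon_loss), the temperature value, and the construction of cluster centers as per-class means are assumptions made here for illustration. The idea it demonstrates is that target-network encodings (or cluster centers) sharing the anchor's label act as positives, and the loss pulls them toward the anchor relative to all negatives.

import torch
import torch.nn.functional as F

# NOTE: illustrative sketch only; names, temperature, and center construction are assumptions.

def supcon_loss(anchors, targets, anchor_labels, target_labels, tau=0.1):
    # Instance-level supervised contrastive loss.
    # anchors: (N, d) source-network embeddings; targets: (M, d) target-network embeddings.
    # Target encodings sharing an anchor's label are positives; all others are negatives.
    a = F.normalize(anchors, dim=1)
    t = F.normalize(targets, dim=1)
    logits = a @ t.T / tau                                   # (N, M) scaled cosine similarities
    pos_mask = (anchor_labels[:, None] == target_labels[None, :]).float()
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # average log-probability over each anchor's positives
    loss = -(pos_mask * log_prob).sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return loss.mean()

def cluster_supcon_loss(anchors, anchor_labels, centers, center_labels, tau=0.1):
    # Cluster-level variant: contrast each instance with per-class cluster centers
    # rather than with individual instances.
    return supcon_loss(anchors, centers, anchor_labels, center_labels, tau)

if __name__ == "__main__":
    torch.manual_seed(0)
    z_src = torch.randn(8, 16)                               # source-network embeddings (strong augmentation)
    z_tgt = torch.randn(8, 16)                               # target-network embeddings (weak augmentation)
    y = torch.randint(0, 3, (8,))                            # class labels
    classes = y.unique()
    centers = torch.stack([z_tgt[y == c].mean(dim=0) for c in classes])
    print(supcon_loss(z_src, z_tgt, y, y).item())
    print(cluster_supcon_loss(z_src, y, centers, classes).item())

In this sketch the cluster-level term reuses the same contrastive form, which keeps the illustration compact; the paper's actual losses may differ in normalization and in how centers are maintained across batches.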
Pages: 1493 - 1519
Number of pages: 27
Related papers
50 in total
  • [1] Universal representation learning for multivariate time series using the instance-level and cluster-level supervised contrastive learning
    Moradinasab, Nazanin
    Sharma, Suchetha
    Bar-Yoseph, Ronen
    Radom-Aizik, Shlomit
    Bilchick, Kenneth C.
    Cooper, Dan M.
    Weltman, Arthur
    Brown, Donald E.
    [J]. DATA MINING AND KNOWLEDGE DISCOVERY, 2024, 38 : 1493 - 1519
  • [2] Instance-Level Contrastive Learning for Weakly Supervised Object Detection
    Zhang, Ming
    Zeng, Bing
    [J]. SENSORS, 2022, 22 (19)
  • [3] Adversarial Cluster-Level and Global-Level Graph Contrastive Learning for node representation
    Tang, Qian
    Zhao, Yiji
    Wu, Hao
    Zhang, Lei
    [J]. KNOWLEDGE-BASED SYSTEMS, 2023, 279
  • [4] Contrastive visual clustering for improving instance-level contrastive learning as a plugin
    Liu, Yue
    Zan, Xiangzhen
    Li, Xianbin
    Liu, Wenbin
    Fang, Gang
    [J]. PATTERN RECOGNITION, 2024, 154
  • [5] Cluster-Level Contrastive Learning for Emotion Recognition in Conversations
    Yang, Kailai
    Zhang, Tianlin
    Alhuzali, Hassan
    Ananiadou, Sophia
    [J]. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2023, 14 (04) : 3269 - 3280
  • [6] Instance-level and Class-level Contrastive Incremental Learning for Image Classification
    Han, Jia-yi
    Liu, Jian-wei
    [J]. 2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [7] TimesURL: Self-Supervised Contrastive Learning for Universal Time Series Representation Learning
    Liu, Jiexi
    Chen, Songcan
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 12, 2024, : 13918 - 13926
  • [8] Online MIL tracking with instance-level semi-supervised learning
    Chen, Si
    Li, Shaozi
    Su, Songzhi
    Tian, Qi
    Ji, Rongrong
    [J]. NEUROCOMPUTING, 2014, 139 : 272 - 288
  • [9] Contrastive learning enhanced by graph neural networks for Universal Multivariate Time Series Representation
    Wang, Xinghao
    Xing, Qiang
    Xiao, Huimin
    Ye, Ming
    [J]. INFORMATION SYSTEMS, 2024, 125
  • [10] Active Learning of Instance-level Constraints for Semi-supervised Document Clustering
    Zhao, Weizhong
    He, Qing
    Ma, Huifang
    Shi, Zhongzhi
    [J]. 2009 IEEE/WIC/ACM INTERNATIONAL JOINT CONFERENCES ON WEB INTELLIGENCE (WI) AND INTELLIGENT AGENT TECHNOLOGIES (IAT), VOL 1, 2009, : 264 - 268