MaeFE: Masked Autoencoders Family of Electrocardiogram for Self-Supervised Pretraining and Transfer Learning

Cited by: 4
Authors
Zhang, Huaicheng [1 ]
Liu, Wenhan [1 ]
Shi, Jiguang [1 ]
Chang, Sheng [1 ]
Wang, Hao [1 ]
He, Jin [1 ]
Huang, Qijun [1 ]
Affiliations
[1] Wuhan Univ, Sch Phys & Technol, Wuhan 430072, Hubei, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Electrocardiography (ECG); mask autoencoder (MAE); pretraining; self-supervised learning; transfer learning;
DOI
10.1109/TIM.2022.3228267
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
The electrocardiogram (ECG) is a universal diagnostic tool for heart disease and a rich data source for deep learning. However, the scarcity of labeled data is a major challenge for medical artificial intelligence, because labeling medical data is time-consuming and costly and requires medical specialists. As a generative self-supervised learning method, the masked autoencoder (MAE) can address this problem. This article proposes the MAE family of ECG (MaeFE). Considering the temporal and spatial characteristics of ECG, MaeFE contains three customized masking modes: the masked time autoencoder (MTAE), the masked lead autoencoder (MLAE), and the masked lead and time autoencoder (MLTAE). MTAE and MLAE emphasize temporal and spatial features, respectively, while MLTAE is a multihead architecture that combines the two. In the pretraining stage, ECG signals from the pretraining dataset are divided into patches and partially masked; the encoder maps the unmasked patches to tokens, and the decoder reconstructs the masked ones. In downstream tasks, the pretrained encoder serves as a classifier for arrhythmia classification on the downstream dataset, a process known as transfer learning. MaeFE outperforms the state-of-the-art self-supervised learning methods SimCLR, MoCo, CLOCS, and MaskUNet in downstream tasks, and MTAE shows the best overall performance. Compared with the contrastive learning models, MTAE achieves at least a 5.18%, 11.80%, and 3.23% improvement in accuracy (Acc), Macro-F1, and area under the curve (AUC), respectively, under linear probing, and outperforms the other models by 8.99% in Acc, 20.18% in Macro-F1, and 7.13% in AUC under fine-tuning. Experiments on multilabel arrhythmia classification, another downstream task, further demonstrate the generalization ability of MaeFE. The experimental results show that MaeFE is efficient and robust in downstream tasks. By overcoming the scarcity of labeled data, MaeFE surpasses other self-supervised learning methods and achieves satisfactory performance, making it well suited to practical applications.
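The pretraining procedure described in the abstract (patchify the ECG, mask most patches, encode the visible ones, reconstruct the masked ones) can be illustrated with a minimal PyTorch sketch. This is an assumption-laden sketch, not the paper's implementation: the class name, patch length, mask ratio, and layer sizes are hypothetical, and only MTAE-style time masking is shown; MLAE/MLTAE would mask leads instead of, or in addition to, time patches.

# Minimal sketch of MTAE-style masked pretraining for ECG (assumption-based;
# hyperparameters and layer sizes are illustrative, not taken from the paper).
import torch
import torch.nn as nn

class TinyMaskedECGAutoencoder(nn.Module):
    def __init__(self, n_leads=12, patch_len=50, d_model=64, mask_ratio=0.75):
        super().__init__()
        self.patch_len, self.mask_ratio = patch_len, mask_ratio
        self.embed = nn.Linear(n_leads * patch_len, d_model)      # patch -> token
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        dec_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=1)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.head = nn.Linear(d_model, n_leads * patch_len)       # token -> reconstructed patch

    def forward(self, x):
        # x: (batch, n_leads, time); split the time axis into non-overlapping patches
        b, c, t = x.shape
        n = t // self.patch_len
        patches = x[:, :, :n * self.patch_len].reshape(b, c, n, self.patch_len)
        patches = patches.permute(0, 2, 1, 3).reshape(b, n, -1)   # (b, n, n_leads*patch_len)

        # Time masking (MTAE-style): randomly keep a small subset of patches
        n_keep = max(1, int(n * (1 - self.mask_ratio)))
        perm = torch.rand(b, n, device=x.device).argsort(dim=1)
        keep_idx = perm[:, :n_keep]
        visible = torch.gather(
            patches, 1, keep_idx.unsqueeze(-1).expand(-1, -1, patches.size(-1)))

        # Encode visible tokens only, fill masked positions with a mask token, decode all
        tokens = self.encoder(self.embed(visible))
        full = self.mask_token.expand(b, n, -1)
        full = full.scatter(1, keep_idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1)), tokens)
        recon = self.head(self.decoder(full))                     # (b, n, n_leads*patch_len)

        # Reconstruction loss computed on the masked patches only
        masked = torch.ones(b, n, device=x.device).scatter(1, keep_idx, 0.0).bool()
        return ((recon - patches) ** 2)[masked].mean()

# Usage: one pretraining step on a dummy batch (2 recordings, 12 leads, 500 samples)
model = TinyMaskedECGAutoencoder()
loss = model(torch.randn(2, 12, 500))
loss.backward()

For the downstream stage described in the abstract, one would discard the decoder and attach a classification head to the pretrained encoder, training either the head alone (linear probe) or the whole network (fine-tuning).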
Pages: 15