MaeFE: Masked Autoencoders Family of Electrocardiogram for Self-Supervised Pretraining and Transfer Learning

Cited by: 4
Authors
Zhang, Huaicheng [1 ]
Liu, Wenhan [1 ]
Shi, Jiguang [1 ]
Chang, Sheng [1 ]
Wang, Hao [1 ]
He, Jin [1 ]
Huang, Qijun [1 ]
Affiliations
[1] Wuhan Univ, Sch Phys & Technol, Wuhan 430072, Hubei, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Electrocardiography (ECG); mask autoencoder (MAE); pretraining; self-supervised learning; transfer learning;
DOI
10.1109/TIM.2022.3228267
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Electrocardiogram (ECG) is a universal diagnostic tool for heart disease and provides abundant data for deep learning. However, the scarcity of labeled data is a major challenge for medical artificial intelligence diagnosis, because acquiring labeled medical data is time-consuming and costly and requires medical specialists. As a generative self-supervised learning method, the masked autoencoder (MAE) can address these problems. This article proposes the MAE family of ECG (MaeFE). Considering the temporal and spatial features of ECG, MaeFE contains three customized masking modes: masked time autoencoder (MTAE), masked lead autoencoder (MLAE), and masked lead and time autoencoder (MLTAE). MTAE and MLAE emphasize temporal and spatial features, respectively, while MLTAE is a multihead architecture that combines both. In the pretraining stage, ECG signals from the pretraining dataset are divided into patches and partially masked; the encoder maps unmasked patches to tokens, and the decoder reconstructs the masked ones. In downstream tasks, the pretrained encoder is used as a classifier for arrhythmia classification on the downstream dataset, a process known as transfer learning. MaeFE outperforms state-of-the-art self-supervised learning methods, including SimCLR, MoCo, CLOCS, and MaskUNet, in downstream tasks, and MTAE shows the best overall performance. Compared to contrastive learning models, MTAE achieves at least a 5.18%, 11.80%, and 3.23% increase in accuracy (Acc), Macro-F1, and area under the curve (AUC), respectively, under linear probing, and outperforms the other models by 8.99% in Acc, 20.18% in Macro-F1, and 7.13% in AUC under fine-tuning. Experiments on multilabel arrhythmia classification, another downstream task, further demonstrate the strong generalization of MaeFE. According to the experimental results, MaeFE is efficient and robust in downstream tasks. By overcoming the scarcity of labeled data, MaeFE surpasses other self-supervised learning methods and achieves satisfying performance; consequently, the proposed algorithm is on track to play a major role in practical applications.
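The masking modes described in the abstract (MTAE masks time patches, MLAE masks whole leads, MLTAE combines both) can be illustrated with a minimal sketch. The patch length, mask ratios, sampling rate, and array layout below are illustrative assumptions for a 12-lead recording, not values taken from the paper.

    # Minimal sketch of the masking modes named in the abstract.
    # Patch length, mask ratios, and signal dimensions are assumptions.
    import numpy as np

    def mask_time(ecg, patch_len=50, mask_ratio=0.75, rng=None):
        """MTAE-style masking: split the recording into time patches and
        zero out a random subset of patch positions across all leads."""
        rng = rng or np.random.default_rng()
        n_leads, n_samples = ecg.shape
        n_patches = n_samples // patch_len
        masked_idx = rng.choice(n_patches, size=int(mask_ratio * n_patches), replace=False)
        mask = np.zeros(n_patches, dtype=bool)
        mask[masked_idx] = True
        masked_ecg = ecg.copy()
        for p in masked_idx:
            masked_ecg[:, p * patch_len:(p + 1) * patch_len] = 0.0
        return masked_ecg, mask  # mask marks the patches the decoder must reconstruct

    def mask_lead(ecg, mask_ratio=0.5, rng=None):
        """MLAE-style masking: zero out a random subset of whole leads so
        the model must recover spatial (inter-lead) structure."""
        rng = rng or np.random.default_rng()
        n_leads, _ = ecg.shape
        masked_idx = rng.choice(n_leads, size=int(mask_ratio * n_leads), replace=False)
        mask = np.zeros(n_leads, dtype=bool)
        mask[masked_idx] = True
        masked_ecg = ecg.copy()
        masked_ecg[masked_idx, :] = 0.0
        return masked_ecg, mask

    # MLTAE would combine both views: one head receives the time-masked
    # input, the other the lead-masked input, and their reconstruction
    # losses on the masked regions are combined.
    ecg = np.random.randn(12, 5000)   # assumed 12-lead, 10 s recording at 500 Hz
    x_t, m_t = mask_time(ecg)
    x_l, m_l = mask_lead(ecg)

In both modes, only the masked regions contribute to the reconstruction loss, which is what forces the encoder to learn temporal (MTAE) or spatial (MLAE) structure from the unmasked context.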
Pages: 15