Interpretable disease prediction using heterogeneous patient records with self-attentive fusion encoder

Cited by: 3
Authors
Kwak, Heeyoung [1 ]
Chang, Jooyoung [2 ]
Choe, Byeongjin [1 ]
Park, Sangmin [2 ,3 ]
Jung, Kyomin [1 ]
Affiliations
[1] Seoul Natl Univ, Dept Elect Engn, 1 Gwanak Ro, Seoul 08826, South Korea
[2] Seoul Natl Univ, Dept Biomed Sci, Seoul, South Korea
[3] Seoul Natl Univ Hosp, Dept Family Med, Seoul, South Korea
Funding
National Research Foundation, Singapore;
Keywords
disease prediction; cardiovascular disease; deep learning; recurrent neural network; attention;
DOI
10.1093/jamia/ocab109
CLC Classification
TP [Automation and Computer Technology];
Discipline Code
0812;
Abstract
Objective: We propose an interpretable disease prediction model that efficiently fuses multiple types of patient records using a self-attentive fusion encoder. We assessed the model performance in predicting cardiovascular disease events, given the records of a general patient population.
Materials and Methods: We extracted 7981 cases and 67 623 controls from the sample cohort database and nationwide healthcare claims data of South Korea. Among the information provided, our model used the sequential records of medical codes and patient characteristics, such as demographic profiles and the most recent health examination results. These two types of patient records were combined in our self-attentive fusion module, whereas previously dominant methods aggregated them using a simple concatenation. The prediction performance was compared to state-of-the-art recurrent neural network-based approaches and other widely used machine learning approaches.
Results: Our model outperformed all the other compared methods in predicting cardiovascular disease events. It achieved an area under the curve of 0.839, while the other compared methods achieved between 0.741 and 0.830. Moreover, our model consistently outperformed the other methods in a more challenging setting in which we tested the model's ability to draw an inference from more nonobvious, diverse factors.
Discussion: We also interpreted the attention weights provided by our model as the relative importance of each time step in the sequence. We showed that our model reveals the informative parts of the patients' history by measuring the attention weights.
Conclusion: We suggest an interpretable disease prediction model that efficiently fuses heterogeneous patient records and demonstrates superior disease prediction performance.
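The fusion idea the abstract describes can be illustrated with a small sketch: instead of concatenating the static patient profile onto the visit sequence, a query derived from the static features attends over the sequential code embeddings, and the resulting per-time-step weights serve as the interpretable importances mentioned in the Discussion. This is a minimal NumPy sketch under stated assumptions, not the authors' implementation; the projection matrices `Wq` and `Wk` and all dimensions are illustrative.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def self_attentive_fusion(seq_emb, static_emb, Wq, Wk):
    """Fuse a visit sequence with static patient features via attention.

    seq_emb    : (T, d) embeddings of medical codes, one row per time step
    static_emb : (d,)   patient characteristics (demographics, exam results)
    Wq, Wk     : (d, d) illustrative projection matrices
    """
    q = static_emb @ Wq                    # query built from the static profile
    k = seq_emb @ Wk                       # keys built from the visit sequence
    scores = k @ q / np.sqrt(q.shape[0])   # scaled dot-product scores
    weights = softmax(scores)              # per-time-step importance (interpretable)
    fused = weights @ seq_emb              # attention-weighted summary of the history
    return fused, weights

# toy example: 5 visits, embedding size 8
rng = np.random.default_rng(0)
T, d = 5, 8
seq = rng.normal(size=(T, d))
static = rng.normal(size=d)
Wq = rng.normal(size=(d, d))
Wk = rng.normal(size=(d, d))
fused, w = self_attentive_fusion(seq, static, Wq, Wk)
```

The attention weights `w` form a distribution over the T time steps, which is what lets the model point at the informative parts of a patient's history rather than treating the sequence as an opaque whole.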
Pages: 2155-2164
Page count: 10
Related Papers
50 records total
  • [21] Contextual Self-attentive Temporal Point Process for Physical Decommissioning Prediction of Cloud Assets
    Yang, Fangkai
    Zhang, Jue
    Wang, Lu
    Qiao, Bo
    Weng, Di
    Qin, Xiaoting
    Weber, Gregory
    Das, Durgesh Nandini
    Rakhunathan, Srinivasan
    Srikanth, Ranganathan
    Lin, Qingwei
    Zhang, Dongmei
    PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023, : 5372 - 5381
  • [22] Sparse Self-Attentive Transformer With Multiscale Feature Fusion on Long-Term SOH Forecasting
    Zhu, Xinshan
    Xu, Chengqian
    Song, Tianbao
    Huang, Zhen
    Zhang, Yun
    IEEE TRANSACTIONS ON POWER ELECTRONICS, 2024, 39 (08) : 10399 - 10408
  • [23] A Deformable and Multi-Scale Network with Self-Attentive Feature Fusion for SAR Ship Classification
    Chen, Peng
    Zhou, Hui
    Li, Ying
    Liu, Bingxin
    Liu, Peng
    JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2024, 12 (09)
  • [24] Analysis of Sentiment on Movie Reviews Using Word Embedding Self-Attentive LSTM
    Sivakumar, Soubraylu
    Rajalakshmi, Ratnavel
    INTERNATIONAL JOURNAL OF AMBIENT COMPUTING AND INTELLIGENCE, 2021, 12 (02) : 33 - 52
  • [25] SPEAKER DIARISATION USING 2D SELF-ATTENTIVE COMBINATION OF EMBEDDINGS
    Sun, G.
    Zhang, C.
    Woodland, P. C.
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 5801 - 5805
  • [26] ISA-GAN: inception-based self-attentive encoder-decoder network for face synthesis using delineated facial images
    Yadav, Nand Kumar
    Singh, Satish Kumar
    Dubey, Shiv Ram
    VISUAL COMPUTER, 2024, 40 (11): : 8205 - 8225
  • [27] Exploiting Intra and Inter-field Feature Interaction with Self-Attentive Network for CTR Prediction
    Zheng, Shenghao
    Xian, Xuefeng
    Hao, Yongjing
    Sheng, Victor S.
    Cui, Zhiming
    Zhao, Pengpeng
    WEB INFORMATION SYSTEMS ENGINEERING - WISE 2021, PT II, 2021, 13081 : 34 - 49
  • [28] Multivariate Sleep Stage Classification using Hybrid Self-Attentive Deep Learning Networks
    Yuan, Ye
    Jia, Kebin
    Ma, Fenglong
    Xun, Guangxu
    Wang, Yaqing
    Su, Lu
    Zhang, Aidong
    PROCEEDINGS 2018 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM), 2018, : 963 - 968
  • [29] Speech Enhancement Using Multi-Stage Self-Attentive Temporal Convolutional Networks
    Lin, Ju
    van Wijngaarden, Adriaan J. de Lind
    Wang, Kuang-Ching
    Smith, Melissa C.
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2021, 29 : 3440 - 3450
  • [30] HarmoSATE: Harmonized embedding-based self-attentive encoder to improve accuracy of privacy-preserving federated predictive analysis
    Lee, Taek-Ho
    Kim, Suhyeon
    Lee, Junghye
    Jun, Chi-Hyuck
    INFORMATION SCIENCES, 2024, 662