Transformer-Based Multi-Modal Data Fusion Method for COPD Classification and Physiological and Biochemical Indicators Identification

Cited by: 4
Authors
Xie, Weidong [1 ]
Fang, Yushan [1 ]
Yang, Guicheng [1 ]
Yu, Kun [2 ]
Li, Wei [1 ,3 ]
Affiliations
[1] Northeastern Univ, Sch Comp Sci & Engn, Shenyang 110169, Peoples R China
[2] Northeastern Univ, Coll Med & Bioinformat Engn, Shenyang 110169, Peoples R China
[3] Key Lab Intelligent Comp Med Image MIIC, Shenyang 110169, Peoples R China
Keywords
multi-modal fusion; cross-modal transformer; low-rank multi-modal fusion; COPD; prediction; prognosis
DOI
10.3390/biom13091391
CLC Classification
Q5 [Biochemistry]; Q7 [Molecular Biology];
Discipline Codes
071010; 081704;
Abstract
As the number of modalities in biomedical data continues to grow, multi-modal data become increasingly valuable for capturing the complex relationships between biological processes and thereby complementing disease classification. However, existing multi-modal fusion methods for biomedical data do not fully exploit intra- and inter-modal interactions, and powerful fusion methods are rarely applied to biomedical data. In this paper, we propose a novel multi-modal data fusion method that addresses these limitations. The method uses a graph neural network and a 3D convolutional network to identify intra-modal relationships, extracting meaningful features from each modality while preserving crucial information. To fuse information across modalities, we employ the Low-rank Multi-modal Fusion method, which integrates multiple modalities while reducing noise and redundancy. In addition, our method incorporates the Cross-modal Transformer to automatically learn relationships between different modalities, enabling richer information exchange and representation. We validate the method on lung CT imaging data and physiological and biochemical data from patients diagnosed with Chronic Obstructive Pulmonary Disease (COPD), where it achieves higher disease classification accuracy than various fusion methods and their variants.
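The two fusion components named in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation (whose architectural details are not given in this record): the shapes, factor tensors, and function names below are illustrative assumptions, showing only the generic Low-rank Multi-modal Fusion tensor trick (an outer product across modalities factorized into rank-wise projections) and a single-head cross-modal attention step in which one modality's features query another's.

```python
import numpy as np

def low_rank_fusion(features, factors):
    """Low-rank Multi-modal Fusion (LMF) sketch.

    features: list of 1-D arrays, one per modality (length d_m each).
    factors:  list of arrays; factors[m] has shape (rank, d_m + 1, d_out).
    Returns the fused representation of shape (d_out,).
    """
    fused = None
    for z, U in zip(features, factors):
        z1 = np.concatenate([z, [1.0]])          # append a constant 1 so uni-modal terms survive the product
        proj = np.einsum('rdo,d->ro', U, z1)     # per-rank projection of this modality
        fused = proj if fused is None else fused * proj  # element-wise product across modalities
    return fused.sum(axis=0)                     # collapse the rank dimension

def cross_modal_attention(queries, keys_values, Wq, Wk, Wv):
    """Single-head cross-modal attention: modality A attends to modality B."""
    Q = queries @ Wq
    K = keys_values @ Wk
    V = keys_values @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability before softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # B-conditioned representation of A's tokens
```

The low-rank factorization avoids materializing the full multi-way tensor product of modality features, which is what makes this fusion style tractable as the number of modalities grows.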
Pages: 18
Related Papers (50 in total)
  • [31] Colour image cross-modal retrieval method based on multi-modal visual data fusion
    Liu, Xiangyuan
    International Journal of Computational Intelligence Studies, 2023, 12 (1-2) : 118 - 129
  • [32] Movie tag prediction: An extreme multi-label multi-modal transformer-based solution with explanation
    Guarascio, Massimo
    Minici, Marco
    Pisani, Francesco Sergio
    De Francesco, Erika
    Lambardi, Pasquale
    JOURNAL OF INTELLIGENT INFORMATION SYSTEMS, 2024, 62 (04) : 1021 - 1043
  • [33] A multi-modal health data fusion and analysis method based on body sensor network
    Wang, Lei
    Chen, Yibo
    Zhao, Zhenying
    Zhao, Lingxiao
    Li, Jin
    Li, Cuimin
    INTERNATIONAL JOURNAL OF SERVICES TECHNOLOGY AND MANAGEMENT, 2019, 25 (5-6) : 474 - 491
  • [34] An Improved Multi-modal Data Decision Fusion Method Based on DS Evidence Theory
    Lu, Shengfu
    Li, Peng
    Li, Mi
    PROCEEDINGS OF 2020 IEEE 4TH INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (ITNEC 2020), 2020, : 1684 - 1690
  • [35] Image-Text Person Re-Identification with Transformer-Based Modal Fusion
    Li, Xin
    Guo, Hubo
    Zhang, Meiling
    Fu, Bo
    ELECTRONICS, 2025, 14 (03):
  • [36] A multi-modal emotion fusion classification method combined expression and speech based on attention mechanism
    Liu, Dong
    Chen, Longxi
    Wang, Lifeng
    Wang, Zhiyong
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (29) : 41677 - 41695
  • [37] Multi-Modal Late Fusion Rice Seed Variety Classification Based on an Improved Voting Method
    He, Xinyi
    Cai, Qiyang
    Zou, Xiuguo
    Li, Hua
    Feng, Xuebin
    Yin, Wenqing
    Qian, Yan
    AGRICULTURE-BASEL, 2023, 13 (03):
  • [38] Visual Sorting Method Based on Multi-Modal Information Fusion
    Han, Song
    Liu, Xiaoping
    Wang, Gang
    APPLIED SCIENCES-BASEL, 2022, 12 (06):
  • [39] Multi-modal Perception Fusion Method Based on Cross Attention
    Zhang B.-L.
    Pan Z.-H.
    Jiang J.-Z.
    Zhang C.-B.
    Wang Y.-X.
    Yang C.-L.
    Zhongguo Gonglu Xuebao/China Journal of Highway and Transport, 2024, 37 (03): : 181 - 193
  • [40] Evaluation Method of Teaching Styles Based on Multi-modal Fusion
    Tang, Wen
    Wang, Chongwen
    Zhang, Yi
    2021 THE 7TH INTERNATIONAL CONFERENCE ON COMMUNICATION AND INFORMATION PROCESSING, ICCIP 2021, 2021, : 9 - 15