Rapid diagnosis of lung cancer by multi-modal spectral data combined with deep learning

Cited by: 0
Authors
Xu, Han [1 ]
Lv, Ruichan [1 ]
Affiliations
[1] Xidian Univ, Sch Electromech Engn, State Key Lab Electromech Integrated Mfg High perf, Xian 710071, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Detection of lung adenocarcinoma; Spectral detection; Information fusion;
DOI
10.1016/j.saa.2025.125997
Chinese Library Classification (CLC)
O433 [Spectroscopy];
Discipline classification codes
0703; 070302;
Abstract
Lung cancer is a malignant tumor that poses a serious threat to human health, and existing diagnostic techniques suffer from high cost and slow turnaround. Early, rapid diagnosis and treatment are therefore essential to improving outcomes. In this study, a deep learning-based multi-modal spectral information fusion (MSIF) network is proposed for lung adenocarcinoma cell detection. First, multi-modal data comprising Fourier transform infrared spectra, UV-vis absorbance spectra, and fluorescence spectra of normal and patient cells were collected. The spectral text data are efficiently processed by a one-dimensional convolutional neural network, while the global and local features of the spectral images are mined by a hybrid ResNet-Transformer model. An adaptive depth-wise convolution (ADConv) is introduced for feature extraction, overcoming limitations of conventional convolution. To enable feature learning across modalities, a cross-modal interaction fusion (CMIF) module is designed; it fuses the extracted spectral image and text features through multi-faceted interaction, making full use of the multi-modal features via feature sharing. The method demonstrated excellent performance on the test sets of Fourier transform infrared, UV-vis absorbance, and fluorescence spectra, achieving 95.83 %, 97.92 %, and 100 % accuracy, respectively. Further experiments validate the advantage of multi-modal spectral data and the robustness of the model's generalization capability. This study not only provides strong
Pages: 14
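
The abstract describes a two-branch design: a 1D CNN for the raw spectral data, a ResNet-Transformer hybrid for the rendered spectral images, and a cross-modal fusion module that combines the two. The following PyTorch sketch illustrates that general kind of architecture only; the class names, layer sizes, and cross-attention fusion are assumptions for exposition and are not the authors' published MSIF, ADConv, or CMIF implementation.

```python
# Illustrative sketch: a minimal two-branch spectral fusion classifier in PyTorch.
# All module names, layer sizes, and the cross-attention fusion are hypothetical;
# they do NOT reproduce the paper's MSIF, ADConv, or CMIF modules.
import torch
import torch.nn as nn


class SpectrumEncoder1D(nn.Module):
    """1D-CNN branch for a sampled spectrum (intensity vs. wavenumber/wavelength)."""

    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # global pooling over the spectral axis
        )
        self.proj = nn.Linear(64, out_dim)

    def forward(self, x):                     # x: (batch, 1, n_points)
        return self.proj(self.conv(x).squeeze(-1))      # (batch, out_dim)


class SpectralImageEncoder(nn.Module):
    """CNN stem + Transformer encoder branch for a rendered spectral image."""

    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.stem = nn.Sequential(             # coarse ResNet-style downsampling stem
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=out_dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):                      # x: (batch, 3, H, W)
        feat = self.stem(x)                    # (batch, C, H/4, W/4)
        tokens = feat.flatten(2).transpose(1, 2)         # (batch, n_patches, C)
        return self.transformer(tokens).mean(dim=1)      # (batch, out_dim)


class CrossModalFusionClassifier(nn.Module):
    """Fuses the two branches with cross-attention, then classifies normal vs. cancer."""

    def __init__(self, dim: int = 128, n_classes: int = 2):
        super().__init__()
        self.spec = SpectrumEncoder1D(dim)
        self.img = SpectralImageEncoder(dim)
        self.cross = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(2 * dim, n_classes)

    def forward(self, spectrum, image):
        s = self.spec(spectrum).unsqueeze(1)   # (batch, 1, dim) spectrum token
        v = self.img(image).unsqueeze(1)       # (batch, 1, dim) image token
        fused, _ = self.cross(query=s, key=v, value=v)   # spectrum attends to image
        out = torch.cat([fused.squeeze(1), v.squeeze(1)], dim=-1)
        return self.head(out)                  # class logits


if __name__ == "__main__":
    model = CrossModalFusionClassifier()
    spectra = torch.randn(4, 1, 1024)          # 4 spectra, each sampled at 1024 points
    images = torch.randn(4, 3, 128, 128)       # 4 rendered spectral plots
    print(model(spectra, images).shape)        # torch.Size([4, 2])
```

In practice, each of the three modalities reported in the abstract (FTIR, UV-vis absorbance, fluorescence) would be evaluated on its own test set as the paper does; the sketch shows only the generic spectrum-plus-image pairing and a single fusion step.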