An Attentive Multi-Modal CNN for Brain Tumor Radiogenomic Classification

Cited by: 11
Authors
Qu, Ruyi [1 ]
Xiao, Zhifeng [2 ]
Affiliations
[1] Univ Toronto, Dept Math, Toronto, ON M5S 2E4, Canada
[2] Penn State Erie, Behrend Coll, Sch Engn, Erie, PA 16563 USA
Keywords
multi-modal medical image; image classification; brain tumor; MGMT methylation status
DOI
10.3390/info13030124
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Medical images of brain tumors are critical for characterizing tumor pathology and enabling early diagnosis. Brain tumor imaging spans multiple modalities, and fusing the distinctive features of each magnetic resonance imaging (MRI) modality can help determine the nature of a tumor accurately. The current genetic analysis approach is time-consuming and requires surgical extraction of brain tissue samples, so accurate classification of multi-modal brain tumor images can speed up detection and reduce patient suffering. Medical image fusion refers to merging the significant information from multiple source images of the same tissue into a single image that carries richer information for diagnosis. This paper proposes a novel attentive deep-learning-based classification model that integrates multi-modal feature aggregation, a lite attention mechanism, separable embedding, and modal-wise shortcuts to improve performance. We evaluate the model on the RSNA-MICCAI dataset, a scenario-specific medical image dataset, and show that the proposed method outperforms the state of the art (SOTA) by around 3%.
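The abstract names four architectural ingredients (multi-modal feature aggregation, a lite attention mechanism, separable embedding, and modal-wise shortcuts) without giving implementation detail. The following is a minimal, hypothetical PyTorch sketch of how such a model could be wired together; the module names (LiteAttention, ModalBranch, MultiModalClassifier), layer sizes, the squeeze-and-excitation-style attention, and the depthwise-separable embedding are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch (not the paper's released implementation):
# one CNN branch per MRI modality, a lightweight channel-attention block,
# a depthwise-separable embedding layer, a modal-wise residual shortcut,
# and concatenation-based multi-modal feature aggregation.
import torch
import torch.nn as nn


class LiteAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention, used here as a
    stand-in for the paper's 'lite attention mechanism'."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # re-weight feature channels


class ModalBranch(nn.Module):
    """One branch per MRI modality (e.g., FLAIR, T1w, T1wCE, T2w)."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            # separable embedding: depthwise conv followed by 1x1 pointwise conv
            nn.Conv2d(64, 64, 3, padding=1, groups=64),
            nn.Conv2d(64, out_dim, 1),
            nn.ReLU(),
        )
        self.attn = LiteAttention(out_dim)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        f = self.features(x)
        f = f + self.attn(f)            # modal-wise shortcut around the attention block
        return self.pool(f).flatten(1)  # (B, out_dim) embedding for this modality


class MultiModalClassifier(nn.Module):
    def __init__(self, num_modalities=4, emb_dim=128):
        super().__init__()
        self.branches = nn.ModuleList(ModalBranch(emb_dim) for _ in range(num_modalities))
        self.head = nn.Linear(num_modalities * emb_dim, 1)  # binary MGMT-status logit

    def forward(self, xs):
        # xs: list of (B, 1, H, W) tensors, one per modality
        fused = torch.cat([branch(x) for branch, x in zip(self.branches, xs)], dim=1)
        return self.head(fused)


if __name__ == "__main__":
    model = MultiModalClassifier()
    scans = [torch.randn(2, 1, 224, 224) for _ in range(4)]
    print(model(scans).shape)  # torch.Size([2, 1])
```

The sketch fuses modalities by late concatenation of per-branch embeddings; the paper's actual aggregation scheme may differ.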
Pages: 14
Related Papers (50 total)
  • [1] Kollias, Dimitrios; Vendal, Karanjot; Gadhavi, Priyankaben; Russom, Solomon. BTDNet: A Multi-Modal Approach for Brain Tumor Radiogenomic Classification. Applied Sciences-Basel, 2023, 13(21).
  • [2] Frintrop, S.; Rome, E.; Nüchter, A.; Surmann, H. An attentive, multi-modal laser "eye". Computer Vision Systems, Proceedings, 2003, 2626: 202-211.
  • [3] Zhang, Yazhou; Tiwari, Prayag; Rong, Lu; Chen, Rui; Alnajem, Nojoom A.; Hossain, M. Shamim. Affective Interaction: Attentive Representation Learning for Multi-Modal Sentiment Classification. ACM Transactions on Multimedia Computing, Communications, and Applications, 2022, 18(3).
  • [4] Rawas, Soha; Samala, Agariadne Dwinggo. Revolutionizing Brain Tumor Analysis: A Fusion of ChatGPT and Multi-Modal CNN for Unprecedented Precision. International Journal of Online and Biomedical Engineering, 2024, 20(8): 37-48.
  • [5] Cho, Jihoon; Park, Jinah. Multi-modal Transformer for Brain Tumor Segmentation. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries (BrainLes 2022), 2023, 13769: 138-148.
  • [6] Islam, Mobarakol; Ren, Hongliang. Multi-modal PixelNet for Brain Tumor Segmentation. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries (BrainLes 2017), 2018, 10670: 298-308.
  • [7] Miron, Casian; Pasarica, Alexandru; Timofte, Radu. Efficient CNN Architecture for Multi-modal Aerial View Object Classification. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2021), 2021: 560-565.
  • [8] Song, Lingyun; Liu, Jun; Qian, Buyue; Sun, Mingxuan; Yang, Kuan; Sun, Meng; Abbas, Samar. A Deep Multi-Modal CNN for Multi-Instance Multi-Label Image Classification. IEEE Transactions on Image Processing, 2018, 27(12): 6025-6038.
  • [9] Mittal, Anshul; Dahiya, Kunal; Malani, Shreya; Ramaswamy, Janani; Kuruvilla, Seba; Ajmera, Jitendra; Chang, Keng-Hao; Agarwal, Sumeet; Kar, Purushottam; Varma, Manik. Multi-modal Extreme Classification. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022: 12383-12392.
  • [10] Zhu, Qi; Li, Huijie; Ye, Haizhou; Zhang, Zhiqiang; Wang, Ran; Fan, Zizhu; Zhang, Daoqiang. Incomplete multi-modal brain image fusion for epilepsy classification. Information Sciences, 2022, 582: 316-333.