GraFMRI: A graph-based fusion framework for robust multi-modal MRI reconstruction

Cited: 0
Authors
Ahmed, Shahzad [1]
Jinchao, Feng [1]
Ferzund, Javed [2]
Ali, Muhammad Usman [2]
Yaqub, Muhammad [3]
Manan, Malik Abdul [1]
Mehmood, Atif [4]
Affiliations
[1] Beijing Univ Technol, Fac Informat Technol, Beijing Key Lab Computat Intelligence & Intelligen, Beijing 100124, Peoples R China
[2] COMSATS Univ Islamabad, Dept Comp Sci, Sahiwal Campus, Sahiwal 57000, Pakistan
[3] Hunan Univ, Sch Biomed Sci, Changsha, Peoples R China
[4] Zhejiang Normal Univ, Dept Comp Sci & Technol, Jinhua 321004, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
MRI reconstruction; Zero-shot learning; Graph neural network; Medical imaging; Generative adversarial network;
DOI
10.1016/j.mri.2024.110279
Chinese Library Classification (CLC) codes
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Discipline classification codes
1002; 100207; 1009;
Abstract
Purpose: This study introduces GraFMRI, a novel framework designed to address the challenges of reconstructing high-quality MRI images from undersampled k-space data. Traditional methods often suffer from noise amplification and loss of structural detail, leading to suboptimal image quality. GraFMRI leverages Graph Neural Networks (GNNs) to transform multi-modal MRI data (T1, T2, PD) into a graph-based representation, enabling the model to capture intricate spatial relationships and inter-modality dependencies.
Methods: The framework integrates Graph-Based Non-Local Means (NLM) Filtering for effective noise suppression and Adversarial Training to reduce artifacts. A dynamic attention mechanism enables the model to focus on key anatomical regions, even when fully-sampled reference images are unavailable. GraFMRI was evaluated on the IXI and fastMRI datasets using Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) as metrics for reconstruction quality.
Results: GraFMRI consistently outperforms traditional and self-supervised reconstruction techniques. Significant improvements in multi-modal fusion were observed, with better preservation of information across modalities. Noise suppression through NLM filtering and artifact reduction via adversarial training led to higher PSNR and SSIM scores across both datasets. The dynamic attention mechanism further enhanced the accuracy of the reconstructions by focusing on critical anatomical regions.
Conclusion: GraFMRI provides a scalable, robust solution for multi-modal MRI reconstruction, addressing noise and artifact challenges while enhancing diagnostic accuracy. Its ability to fuse information from different MRI modalities makes it adaptable to various clinical applications, improving the quality and reliability of reconstructed images.
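The Methods paragraph describes converting co-registered multi-modal data (T1, T2, PD) into a graph representation that a GNN then processes. The following is a minimal sketch of that fusion idea only, assuming co-registered slices, a patch-level k-nearest-neighbour graph, and a single hand-written graph-convolution step; it is not the authors' GraFMRI code, and the patch size, neighbour count, and function names are illustrative assumptions.

# Illustrative sketch only: fusing co-registered T1/T2/PD patches on a k-NN graph
# with one hand-rolled GCN-style layer. Not the authors' GraFMRI implementation;
# shapes, patch size, and function names are assumptions for demonstration.
import torch
import torch.nn.functional as F

def extract_patches(volumes, patch=8):
    """volumes: (M, H, W) co-registered modalities -> (N, M*patch*patch) node features."""
    m, h, w = volumes.shape
    v = volumes[:, : h - h % patch, : w - w % patch]
    p = v.unfold(1, patch, patch).unfold(2, patch, patch)   # (M, nh, nw, patch, patch)
    return p.permute(1, 2, 0, 3, 4).reshape(-1, m * patch * patch)

def knn_adjacency(x, k=8):
    """Symmetric k-nearest-neighbour adjacency (self-loops included) from node features."""
    dist = torch.cdist(x, x)                         # pairwise Euclidean distances
    idx = dist.topk(k + 1, largest=False).indices    # each node's k neighbours + itself
    adj = torch.zeros(x.shape[0], x.shape[0])
    adj.scatter_(1, idx, 1.0)
    return ((adj + adj.t()) > 0).float()             # symmetrise

def gcn_layer(x, adj, weight):
    """One graph-convolution step: relu(D^-1/2 A D^-1/2 X W)."""
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    norm_adj = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)
    return F.relu(norm_adj @ x @ weight)

# Toy usage: three 64x64 arrays standing in for co-registered T1/T2/PD slices.
modalities = torch.randn(3, 64, 64)
nodes = extract_patches(modalities)                  # (64, 192): one node per 8x8 patch
adj = knn_adjacency(nodes)
w = torch.randn(nodes.shape[1], 32)                  # would be a learned parameter in practice
fused = gcn_layer(nodes, adj, w)                     # (64, 32) fused patch embeddings

In a full model the projection matrix would be learned, several such layers would be stacked, and the fused node embeddings would feed the graph-based NLM filtering, adversarial training, and attention components summarized in the abstract.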
Pages: 16
Related papers
50 records in total
  • [1] Leveraging multi-modal fusion for graph-based image annotation
    Amiri, S. Hamid
    Jamzad, Mansour
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2018, 55 : 816 - 828
  • [2] Flexible Multi-modal Graph-Based Segmentation
    Sanberg, Willem P.
    Do, Luat
    de With, Peter H. N.
    ADVANCED CONCEPTS FOR INTELLIGENT VISION SYSTEMS, ACIVS 2013, 2013, 8192 : 492 - 503
  • [3] A Novel Graph-based Multi-modal Fusion Encoder for Neural Machine Translation
    Yin, Yongjing
    Meng, Fandong
    Su, Jinsong
    Zhou, Chulun
    Yang, Zhengyuan
    Zhou, Jie
    Luo, Jiebo
    58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020), 2020, : 3025 - 3035
  • [4] Graph-Based Multi-Modal Multi-View Fusion for Facial Action Unit Recognition
    Chen, Jianrong
    Dey, Sujit
    IEEE ACCESS, 2024, 12 : 69310 - 69324
  • [5] Graph-Based Hand-Object Meshes and Poses Reconstruction With Multi-Modal Input
    Almadani, Murad
    Elhayek, Ahmed
    Malik, Jameel
    Stricker, Didier
    IEEE ACCESS, 2021, 9 : 136438 - 136447
  • [6] Semantic2Graph: graph-based multi-modal feature fusion for action segmentation in videos
    Zhang, Junbin
    Tsai, Pei-Hsuan
    Tsai, Meng-Hsun
    APPLIED INTELLIGENCE, 2024, 54 (02) : 2084 - 2099
  • [7] Designing a graph-based framework to support a multi-modal approach for music information retrieval
    Hsu, Jia-Lien
    Huang, Chien-Chang
    MULTIMEDIA TOOLS AND APPLICATIONS, 2015, 74 (15) : 5401 - 5427