FEFA: Frequency Enhanced Multi-Modal MRI Reconstruction With Deep Feature Alignment

Cited by: 0
Authors
Chen, Xuanmin [1 ]
Ma, Liyan [1 ,2 ]
Ying, Shihui [3 ]
Shen, Dinggang [4 ,5 ]
Zeng, Tieyong [6 ]
Affiliations
[1] Shanghai Univ, Sch Comp Engn & Sci, Shanghai 200444, Peoples R China
[2] Shanghai Univ, Sch Mechatron Engn & Automat, Shanghai Key Lab of Intelligent Mfg & Robot, Shanghai 200444, Peoples R China
[3] Shanghai Univ, Sch Sci, Dept Math, Shanghai 200444, Peoples R China
[4] ShanghaiTech Univ, Sch Biomed Engn, Shanghai 201210, Peoples R China
[5] Shanghai United Imaging Intelligence Co Ltd, Shanghai 200030, Peoples R China
[6] Chinese Univ Hong Kong, Ctr Math Artificial Intelligence, Dept Math, Hong Kong, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Image reconstruction; Magnetic resonance imaging; Convolution; Training; Imaging; Frequency-domain analysis; Compressed sensing; MRI reconstruction; multi-modal feature alignment; feature refinement; IMAGE-RECONSTRUCTION; NEURAL-NETWORK; CONTRAST MRI; TRANSFORMER;
DOI
10.1109/JBHI.2024.3432139
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Integrating complementary information from multiple magnetic resonance imaging (MRI) modalities is often necessary to make accurate and reliable diagnostic decisions. However, because these modalities are acquired at different speeds, obtaining the full set of images can be time-consuming and require significant effort. Reference-based MRI reconstruction aims to accelerate slower, under-sampled imaging modalities, such as T2-modality, by utilizing redundant information from faster, fully sampled modalities, such as T1-modality. Unfortunately, spatial misalignment between different modalities often degrades the final results. To address this issue, we propose FEFA, which consists of cascaded FEFA blocks. Each FEFA block first aligns and fuses the two modalities at the feature level. The combined features are then filtered in the frequency domain to enhance the important features while simultaneously suppressing the less essential ones, thereby ensuring accurate reconstruction. Furthermore, we emphasize the advantages of combining the reconstruction results from multiple cascaded blocks, which also helps stabilize the training process. Compared to existing registration-then-reconstruction and cross-attention-based approaches, our method is end-to-end trainable without requiring additional supervision, extensive parameters, or heavy computation. Experiments on the public fastMRI, IXI, and in-house datasets demonstrate that our approach is effective across various under-sampling patterns and ratios.
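The frequency-domain filtering step described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: in FEFA the frequency weighting is presumably learned end-to-end on multi-channel feature maps, whereas the hand-made low-pass mask and single-channel map below are illustrative assumptions only.

```python
import numpy as np

def frequency_filter(features, mask):
    """Filter a 2-D feature map in the frequency domain:
    FFT -> element-wise weighting of the spectrum -> inverse FFT.
    `mask` plays the role of the (learned) frequency weights."""
    spectrum = np.fft.fft2(features)
    filtered = spectrum * mask  # emphasize some bands, suppress others
    return np.real(np.fft.ifft2(filtered))

# Example: a fixed low-pass mask. For an unshifted FFT the low
# frequencies sit in the four corners of the spectrum.
h, w = 8, 8
mask = np.zeros((h, w))
mask[:2, :2] = mask[:2, -2:] = mask[-2:, :2] = mask[-2:, -2:] = 1.0

feat = np.random.default_rng(0).standard_normal((h, w))
smoothed = frequency_filter(feat, mask)  # same shape, high freqs removed
```

With an all-ones mask the operation is the identity, so the mask alone decides which feature frequencies survive; replacing the fixed mask with learnable per-channel weights gives a trainable variant of this idea.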
Pages: 6751-6763
Page count: 13
Related Papers
50 records total
  • [41] GraFMRI: A graph-based fusion framework for robust multi-modal MRI reconstruction
    Ahmed, Shahzad
    Jinchao, Feng
    Ferzund, Javed
    Ali, Muhammad Usman
    Yaqub, Muhammad
    Manan, Malik Abdul
    Mehmood, Atif
    MAGNETIC RESONANCE IMAGING, 2025, 116
  • [42] Disambiguity and Alignment: An Effective Multi-Modal Alignment Method for Cross-Modal Recipe Retrieval
    Zou, Zhuoyang
    Zhu, Xinghui
    Zhu, Qinying
    Zhang, Hongyan
    Zhu, Lei
    FOODS, 2024, 13 (11)
  • [43] MMEA: Entity Alignment for Multi-modal Knowledge Graph
    Chen, Liyi
    Li, Zhi
    Wang, Yijun
    Xu, Tong
    Wang, Zhefeng
    Chen, Enhong
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT (KSEM 2020), PT I, 2020, 12274 : 134 - 147
  • [44] Gromov-Wasserstein Multi-modal Alignment and Clustering
    Gong, Fengjiao
    Nie, Yuzhou
    Xu, Hongteng
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 603 - 613
  • [45] Semantic Alignment Network for Multi-Modal Emotion Recognition
    Hou, Mixiao
    Zhang, Zheng
    Liu, Chang
    Lu, Guangming
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (09) : 5318 - 5329
  • [46] Multi-modal Reconstruction in Brain Perfusion SPECT
    Vija, Alexander Hans
    Cachovan, Michal
    JOURNAL OF NUCLEAR MEDICINE, 2019, 60
  • [47] Progressively Modality Freezing for Multi-Modal Entity Alignment
    Huang, Yani
    Zhang, Xuefeng
    Zhang, Richong
    Chen, Junfan
    Kim, Jaein
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 3477 - 3489
  • [48] Heterogeneous Feature Selection With Multi-Modal Deep Neural Networks and Sparse Group LASSO
    Zhao, Lei
    Hu, Qinghua
    Wang, Wenwu
    IEEE TRANSACTIONS ON MULTIMEDIA, 2015, 17 (11) : 1936 - 1948
  • [49] Multi-modal voice pathology detection architecture based on deep and handcrafted feature fusion
    Omeroglu, Asli Nur
    Mohammed, Hussein M. A.
    Oral, Emin Argun
    ENGINEERING SCIENCE AND TECHNOLOGY-AN INTERNATIONAL JOURNAL-JESTECH, 2022, 36
  • [50] MAFE: Multi-modal Alignment via Mutual Information Maximum Perspective in Multi-modal Fake News Detection
    Qin, Haimei
    Jing, Yaqi
    Duan, Yunqiang
    Jiang, Lei
    PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024, 2024, : 1515 - 1521