FEFA: Frequency Enhanced Multi-Modal MRI Reconstruction With Deep Feature Alignment

Cited: 0
Authors
Chen, Xuanmin [1 ]
Ma, Liyan [1 ,2 ]
Ying, Shihui [3 ]
Shen, Dinggang [4 ,5 ]
Zeng, Tieyong [6 ]
Affiliations
[1] Shanghai Univ, Sch Comp Engn & Sci, Shanghai 200444, Peoples R China
[2] Shanghai Univ, Sch of Mechatron Engn & Automat, Shanghai Key Laboratory of Intelligent Mfg & Robot, Shanghai 200444, Peoples R China
[3] Shanghai Univ, Sch Sci, Dept Math, Shanghai 200444, Peoples R China
[4] ShanghaiTech Univ, Sch Biomed Engn, Shanghai 201210, Peoples R China
[5] Shanghai United Imaging Intelligence Co Ltd, Shanghai 200030, Peoples R China
[6] Chinese Univ Hong Kong, Ctr Math Artificial Intelligence, Dept Math, Hong Kong, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Image reconstruction; Magnetic resonance imaging; Convolution; Training; Imaging; Frequency-domain analysis; Compressed sensing; MRI reconstruction; multi-modal feature alignment; feature refinement; IMAGE-RECONSTRUCTION; NEURAL-NETWORK; CONTRAST MRI; TRANSFORMER;
DOI
10.1109/JBHI.2024.3432139
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Integrating complementary information from multiple magnetic resonance imaging (MRI) modalities is often necessary for accurate and reliable diagnostic decisions. However, because these modalities are acquired at different speeds, collecting them all is time-consuming and labor-intensive. Reference-based MRI reconstruction aims to accelerate a slower, under-sampled modality, such as T2-weighted imaging, by exploiting redundant information from a faster, fully sampled modality, such as T1-weighted imaging. Unfortunately, spatial misalignment between modalities often degrades the final results. To address this issue, we propose FEFA, a network composed of cascaded FEFA blocks. Each block first aligns and fuses the two modalities at the feature level; the fused features are then filtered in the frequency domain to enhance the important components while suppressing less essential ones, ensuring accurate reconstruction. Furthermore, we show the benefit of combining the reconstruction results from multiple cascaded blocks, which also stabilizes training. Compared with existing registration-then-reconstruction and cross-attention-based approaches, our method is end-to-end trainable without requiring additional supervision, extensive parameters, or heavy computation. Experiments on the public fastMRI and IXI datasets and an in-house dataset demonstrate that our approach is effective across various under-sampling patterns and ratios.
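The frequency-domain filtering step described above can be illustrated with a minimal NumPy sketch (not the authors' implementation): a fused feature map is moved into the frequency domain with an FFT, re-weighted element-wise by a per-frequency gate, and transformed back. In the paper the gate would be produced by the network; here it is a fixed, hypothetical low-pass mask for illustration only.

```python
import numpy as np

def frequency_filter(features, gate):
    """Re-weight a 2-D feature map in the frequency domain.

    features: (H, W) real-valued feature map (e.g., fused T1/T2 features).
    gate:     (H, W) non-negative weights that amplify important frequency
              components and suppress less essential ones.
    Returns the filtered feature map in the spatial domain.
    """
    spectrum = np.fft.fft2(features)        # spatial -> frequency domain
    filtered = spectrum * gate              # element-wise re-weighting
    return np.real(np.fft.ifft2(filtered))  # frequency -> spatial domain

# Toy usage: suppress high frequencies of a random "fused" feature map
# with a hand-built low-pass gate (a learned gate in the actual method).
h = w = 8
fused = np.random.default_rng(0).standard_normal((h, w))
freqs_y = np.fft.fftfreq(h)[:, None]
freqs_x = np.fft.fftfreq(w)[None, :]
low_pass = (np.abs(freqs_y) + np.abs(freqs_x) < 0.3).astype(float)
out = frequency_filter(fused, low_pass)
```

An all-ones gate leaves the features unchanged, so the filtering reduces to an identity when no re-weighting is needed; any non-trivial gate selectively attenuates or boosts frequency bands of the fused features.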
Pages: 6751-6763
Page count: 13