Region-Guided Channel-Wise Attention Network for Accelerated MRI Reconstruction

Cited by: 1
Authors
Liu, Jingshuai [1 ]
Qin, Chen [1 ]
Yaghoobi, Mehrdad [1 ]
Affiliations
[1] Univ Edinburgh, IDCOM, Sch Engn, Edinburgh, Midlothian, Scotland
Keywords
MRI reconstruction; Deep learning; Region-guided channel-wise attention; Compressed sensing MRI
DOI
10.1007/978-3-031-21014-3_3
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Magnetic resonance imaging (MRI) is widely used in clinical practice for medical diagnosis. However, its long acquisition time hinders use in time-critical applications. In recent years, deep learning-based methods have leveraged the powerful representations of neural networks to recover high-quality MR images from undersampled measurements, shortening the acquisition process and enabling accelerated MRI scanning. Despite this inspiring success, providing high-fidelity reconstructions at high acceleration factors remains challenging. Attention modules, an important mechanism in deep neural networks, have been used to improve reconstruction quality. Owing to their computational cost, however, many attention modules are unsuitable for application to high-resolution features or for capturing spatial information, which potentially limits network capacity. To address this issue, we propose a novel channel-wise attention that operates under the guidance of implicitly learned spatial semantics. We incorporate the proposed attention module in a deep network cascade for fast MRI reconstruction. In experiments, validated qualitatively and quantitatively on the FastMRI knee dataset, we demonstrate that the proposed framework produces superior reconstructions with appealing local visual detail compared to other deep learning-based models.
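The abstract does not give the module's exact formulation. As a rough illustration of the general idea only, the following hypothetical PyTorch sketch computes channel-wise attention weights from features pooled over implicitly learned soft spatial regions; the class name, region count, and reduction ratio are all assumptions for illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RegionGuidedChannelAttention(nn.Module):
    """Hypothetical sketch: channel attention guided by learned soft regions.

    A 1x1 conv predicts K soft spatial region masks; features are pooled
    within each region, channel weights are computed per region, and the
    averaged weights rescale the input channels.
    """

    def __init__(self, channels: int, num_regions: int = 4, reduction: int = 4):
        super().__init__()
        self.region_conv = nn.Conv2d(channels, num_regions, kernel_size=1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Soft region masks, normalized over spatial locations: (b, k, h*w)
        masks = F.softmax(self.region_conv(x).flatten(2), dim=-1)
        feats = x.flatten(2)  # (b, c, h*w)
        # Region-wise pooled descriptors: (b, k, c)
        region_feats = torch.einsum("bkn,bcn->bkc", masks, feats)
        # Per-region channel gates in (0, 1), averaged over regions: (b, c)
        weights = torch.sigmoid(self.fc(region_feats)).mean(dim=1)
        return x * weights.view(b, c, 1, 1)
```

Because the masks are normalized with a softmax over spatial positions, the pooling is a cheap weighted average per region, so this kind of module could plausibly be applied to high-resolution feature maps; whether the paper's module matches this design is an open question from the abstract alone.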
Pages: 21-31
Page count: 11
Related Papers
50 records
  • [1] MRI RECONSTRUCTION VIA CASCADED CHANNEL-WISE ATTENTION NETWORK
    Huang, Qiaoying
    Yang, Dong
    Wu, Pengxiang
    Qu, Hui
    Yi, Jingru
    Metaxas, Dimitris
    2019 IEEE 16TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2019), 2019, : 1622 - 1626
  • [2] A Modified Generative Adversarial Network Using Spatial and Channel-Wise Attention for CS-MRI Reconstruction
    Li, Guangyuan
    Lv, Jun
    Wang, Chengyan
    IEEE ACCESS, 2021, 9 : 83185 - 83198
  • [3] CHANNEL-WISE TEMPORAL ATTENTION NETWORK FOR VIDEO ACTION RECOGNITION
    Lei, Jianjun
    Jia, Yalong
    Peng, Bo
    Huang, Qingming
    2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2019, : 562 - 567
  • [4] Spatial-temporal channel-wise attention network for action recognition
    Chen, Lin
    Liu, Yungang
    Man, Yongchao
    Multimedia Tools and Applications, 2021, 80 : 21789 - 21808
  • [5] Spatial-temporal channel-wise attention network for action recognition
    Chen, Lin
    Liu, Yungang
    Man, Yongchao
    MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (14) : 21789 - 21808
  • [6] High-Performance Light Field Reconstruction with Channel-wise and SAI-wise Attention
    Hu, Zexi
    Chung, Yuk Ying
    Zandavi, Seid Miad
    Ouyang, Wanli
    He, Xiangjian
    Gao, Yuefang
    NEURAL INFORMATION PROCESSING, ICONIP 2019, PT V, 2019, 1143 : 118 - 126
  • [7] Automated Heartbeat Classification Exploiting Convolutional Neural Network With Channel-Wise Attention
    Li, Feiteng
    Wu, Jiaquan
    Jia, Menghan
    Chen, Zhijian
    Pu, Yu
    IEEE ACCESS, 2019, 7 : 122955 - 122963
  • [8] Region-Guided and Dual Attention Discriminative Learning Network for Hyperspectral Target Detection
    Zhong J.-P.
    Li Y.-S.
    Xie W.-Y.
    Lei J.
    Paolo G.
    Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2024, 52 (05): : 1716 - 1729
  • [9] CAISFormer: Channel-wise attention transformer for image steganography
    Zhou, Yuhang
    Luo, Ting
    He, Zhouyan
    Jiang, Gangyi
    Xu, Haiyong
    Chang, Chin-Chen
    NEUROCOMPUTING, 2024, 603
  • [10] CarveNet: a channel-wise attention-based network for irregular scene text recognition
    Wu, Guibin
    Zhang, Zheng
    Xiong, Yongping
    International Journal on Document Analysis and Recognition (IJDAR), 2022, 25 : 177 - 186