CS-MRI Reconstruction Using an Improved GAN with Dilated Residual Networks and Channel Attention Mechanism

Cited by: 3
Authors
Li, Xia [1 ]
Zhang, Hui [1 ]
Yang, Hao [1 ]
Li, Tie-Qiang [2 ,3 ]
Affiliations
[1] China Jiliang Univ, Coll Informat Engn, Hangzhou 310018, Peoples R China
[2] Karolinska Inst, Dept Clin Sci Intervent & Technol, S-14186 Stockholm, Sweden
[3] Karolinska Univ Hosp, Dept Med Radiat & Nucl Med, S-17176 Stockholm, Sweden
Funding
Natural Science Foundation of Zhejiang Province
Keywords
compressed sensing MRI; GAN; U-net; dilated residual blocks; channel attention mechanism; GENERATIVE ADVERSARIAL NETWORK;
DOI
10.3390/s23187685
CLC Number
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
Compressed sensing (CS) MRI has shown great potential for improving time efficiency. Deep learning techniques, specifically generative adversarial networks (GANs), have emerged as potent tools for fast CS-MRI reconstruction. Yet, as deep learning reconstruction models grow more complex, reconstruction time can lengthen and convergence can become harder to achieve. In this study, we present a novel GAN-based model that delivers superior performance without escalating model complexity. Our generator module, built on the U-net architecture, incorporates dilated residual (DR) networks, expanding the network's receptive field without increasing the parameter count or computational load. At every step of the downsampling path, the revamped generator includes a DR network whose dilation rate is adjusted according to the depth of the network layer. Moreover, we introduce a channel attention mechanism (CAM) that discriminates between channels and suppresses background noise, thereby focusing on key information. This mechanism combines global maximum and average pooling to refine channel attention. We conducted comprehensive experiments with the designed model using public-domain MRI datasets of the human brain. Ablation studies confirmed the efficacy of the modified modules within the network. Incorporating the DR networks and CAM raised the peak signal-to-noise ratio (PSNR) of the reconstructed images by about 1.2 and 0.8 dB on average, respectively, even at 10× CS acceleration. Compared with other relevant models, the proposed model exhibits exceptional performance, achieving not only excellent stability but also higher PSNR and SSIM than most of the compared networks. Relative to U-net, DR-CAM-GAN's average gains in SSIM and PSNR were 14% and 15%, respectively, and its MSE was reduced by a factor of two to seven. The model presents a promising pathway for enhancing the efficiency and quality of CS-MRI reconstruction.
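The two architectural changes highlighted in the abstract (dilated residual blocks whose dilation rate depends on encoder depth, and channel attention that fuses global average and max pooling) can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' released code: the module names, channel width (64), dilation schedule (1, 2, 4, 8), reduction ratio (16), and the BatchNorm/ReLU layout are assumptions made for clarity.

```python
# Hypothetical sketch of the two modules described in the abstract.
# Names, channel sizes, dilation rates, and the reduction ratio are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedResidualBlock(nn.Module):
    """Residual block with dilated 3x3 convolutions; padding equals the dilation
    rate, so the spatial size and parameter count stay unchanged while the
    receptive field grows with the dilation rate."""

    def __init__(self, channels: int, dilation: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))  # identity skip connection


class ChannelAttention(nn.Module):
    """Channel attention that combines global average and max pooling through a
    shared bottleneck MLP and rescales the feature channels."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )

    def forward(self, x):
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))  # global average pooling branch
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))   # global max pooling branch
        return x * torch.sigmoid(avg + mx)           # per-channel weights in (0, 1)


if __name__ == "__main__":
    # Depth-dependent dilation along a U-net-style downsampling path (assumed rates).
    feats = torch.randn(1, 64, 128, 128)
    for depth, rate in enumerate([1, 2, 4, 8], start=1):
        feats = ChannelAttention(64)(DilatedResidualBlock(64, dilation=rate)(feats))
        print(f"depth {depth}, dilation {rate}: {tuple(feats.shape)}")
```

Setting the padding equal to the dilation rate is what keeps the feature-map size fixed for a 3×3 kernel, which is how a dilated block can widen the receptive field at deeper encoder levels without adding parameters.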
Pages: 16
Related Papers
44 records in total
  • [21] Image Super-Resolution Using Very Deep Residual Channel Attention Networks
    Zhang, Yulun
    Li, Kunpeng
    Li, Kai
    Wang, Lichen
    Zhong, Bineng
    Fu, Yun
    COMPUTER VISION - ECCV 2018, PT VII, 2018, 11211 : 294 - 310
  • [22] Improved microvascular imaging with optical coherence tomography using 3D neural networks and a channel attention mechanism
    Rashidi, Mohammad
    Kalenkov, Georgy
    Green, Daniel J.
    Mclaughlin, Robert A.
    SCIENTIFIC REPORTS, 2024, 14 (01):
  • [23] Clay Mineral Image Classification Using Fusion of Improved Residual Network and Attention Mechanism
    Du, Ruishan
    Chen, Yuxin
    Meng, Lingdong
    Zhang, Tong
    Cheng, Jiaxin
    COMPUTER ENGINEERING AND APPLICATIONS, 2024, 60 (23) : 333 - 339
  • [24] Reconstruction residual network with a fused spatial-channel attention mechanism for automatically classifying diabetic foot ulcer
    Wang, Jyun-Guo
    Huang, Yu-Ting
    PHYSICAL AND ENGINEERING SCIENCES IN MEDICINE, 2024 : 1581 - 1592
  • [25] Fragments Inpainting for Tomb Murals Using a Dual-Attention Mechanism GAN with Improved Generators
    Wu, Meng
    Chang, Xiao
    Wang, Jia
    APPLIED SCIENCES-BASEL, 2023, 13 (06):
  • [27] LCRCA: image super-resolution using lightweight concatenated residual channel attention networks
    Peng, Changmeng
    Shu, Pei
    Huang, Xiaoyang
    Fu, Zhizhong
    Li, Xiaofeng
    APPLIED INTELLIGENCE, 2022, 52 (09) : 10045 - 10059
  • [28] TCRAN: Multivariate time series classification using residual channel attention networks with time correction
    Zhu, Hegui
    Zhang, Jiapeng
    Cui, Hao
    Wang, Kai
    Tang, Qingsong
    APPLIED SOFT COMPUTING, 2022, 114
  • [29] Super-resolution reconstruction of terahertz images based on a deep-learning network with a residual channel attention mechanism
    Yang, Xiuwei
    Zhang, Dehai
    Wang, Zhongmin
    Zhang, Yanbo
    Wu, Jun
    Wu, Biyuan
    Wu, Xiaohu
    APPLIED OPTICS, 2022, 61 (12) : 3363 - 3370
  • [30] Image super-resolution reconstruction using Swin Transformer with efficient channel attention networks
    Sun, Zhenxi
    Zhang, Jin
    Chen, Ziyi
    Hong, Lu
    Zhang, Rui
    Li, Weishi
    Xia, Haojie
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 136