Deep Image Prior Acceleration Method for Target Offset in Low-dose CT Images Denoising

Cited: 0
Authors
Zeng, Li [1 ,2 ]
Xiong, Xilin [1 ,2 ]
Chen, Wei [3 ]
Affiliations
[1] Chongqing Univ, Coll Math & Stat, Chongqing 401331, Peoples R China
[2] Chongqing Univ, Engn Res Ctr Ind Computed Tomog Nondestruct Testin, Educ Minist China, Chongqing 400044, Peoples R China
[3] Army Med Univ, Southwest Hosp, Dept Radiol, Chongqing 400038, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image denoising; Low Dose CT (LDCT); Deep learning; Deep Image Prior (DIP); Acceleration method;
DOI
10.11999/JEIT220551
CLC Number
TM [Electrotechnics]; TN [Electronics and Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
Low-Dose CT (LDCT) imaging significantly reduces the X-ray radiation dose, but the resulting images contain substantial noise that interferes with doctors' diagnoses. Deep Image Prior (DIP) is an unsupervised deep learning algorithm that uses a random tensor as the input of a neural network and iterates with a single LDCT image as the target. However, DIP needs thousands of iterations to reach its best denoising result, which makes the method slow. Therefore, a DIP acceleration method based on target offset is proposed for low-dose CT image denoising, which aims to improve the running speed while maintaining the quality of the denoised image. Exploiting the similarity between LDCT slice images of an organ (such as the lungs), the algorithm links the otherwise independent networks whose target images are different slices through parameter inheritance: the network for the current slice is initialized with the parameters of the network trained on the previous slice, and its trained parameters in turn initialize the network for the next slice. In addition, since the fixed random tensor used as the DIP network input differs greatly from the target image, the LDCT image preprocessed by a traditional model is used as the network input to further accelerate the iteration. Experiments show that the proposed acceleration algorithm improves the iteration speed by 10.45% over the original DIP network even without traditional-model preprocessing. When the LDCT image preprocessed by the Relative Total Variation (RTV) model is used as the network input, the peak signal-to-noise ratio reaches 29.13 while the overall iteration speed increases by 94.31%. Therefore, the algorithm greatly improves the running speed while maintaining the denoising quality of DIP, and the speedup is most pronounced when the RTV-preprocessed CT image is used as the network input.
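The core idea of the acceleration — warm-starting the optimization for each slice from the parameters fitted on the previous, similar slice instead of from a random initialization — can be illustrated with a deliberately minimal sketch. This is not the authors' implementation: the "network" below is a toy direct parameterization trained by plain gradient descent, and all names (`denoise_slice`, `tol`, the synthetic slices) are illustrative assumptions; a real DIP setup would use a convolutional network and an optimizer such as Adam.

```python
import numpy as np

def denoise_slice(target, theta0, lr=0.1, tol=1e-3, max_iters=10000):
    """Toy stand-in for DIP training: the 'network parameters' are the
    image itself, fitted to the noisy target slice by gradient descent
    on the squared error. Returns the result and the iteration count."""
    theta = theta0.copy()
    for it in range(1, max_iters + 1):
        grad = 2.0 * (theta - target)   # gradient of ||theta - target||^2
        theta -= lr * grad
        if np.mean((theta - target) ** 2) < tol:
            return theta, it
    return theta, max_iters

rng = np.random.default_rng(0)
clean = rng.random((8, 8))
# Two adjacent, highly similar slices of the same synthetic "organ"
slice1 = clean + 0.05 * rng.standard_normal((8, 8))
slice2 = clean + 0.05 * rng.standard_normal((8, 8))

# Original DIP: every slice starts from a fresh random initialization
_, iters_cold = denoise_slice(slice2, rng.random((8, 8)))

# Proposed scheme: inherit the parameters fitted on the previous slice
theta1, _ = denoise_slice(slice1, rng.random((8, 8)))
_, iters_warm = denoise_slice(slice2, theta1)

print(f"cold start: {iters_cold} iters, warm start: {iters_warm} iters")
```

Because adjacent slices differ only slightly, the warm-started run begins much closer to its target and needs far fewer iterations, which is the mechanism behind the reported 10.45% speedup; replacing the random input with an RTV-preprocessed image shrinks the input-target gap in the same spirit.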
Pages: 2188-2196
Number of pages: 9