Boosting Noise Reduction Effect via Unsupervised Fine-Tuning Strategy

Cited by: 1
Authors
Jiang, Xinyi [1 ]
Xu, Shaoping [1 ]
Wu, Junyun [1 ]
Zhou, Changfei [1 ]
Ji, Shuichen [1 ]
Affiliations
[1] Nanchang Univ, Sch Math & Comp Sci, Nanchang 330031, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 5
Keywords
boosting denoising effect; supervised denoising models; data bias; unsupervised denoising models; flexibility; fine-tuning; IMAGE; SPARSE;
DOI
10.3390/app14051742
Chinese Library Classification
O6 [Chemistry];
Discipline code
0703
Abstract
Over the last decade, supervised denoising models trained on extensive datasets have achieved remarkable image-denoising performance. However, these models offer limited flexibility and suffer varying degrees of degradation in noise-reduction capability in practical scenarios, particularly when the noise distribution of a given noisy image deviates from that of the training images. To tackle this problem, we propose a two-stage denoising model that appends an unsupervised fine-tuning phase after a supervised denoising model has processed the input noisy image and produced a denoised image (regarded as a preprocessed image). More specifically, in the first stage we replace the convolution blocks of the U-shaped network framework (used in the deep image prior method) with Transformer modules; the resulting model is referred to as a U-Transformer. The U-Transformer is trained on pairs of noisy images and their labels to preprocess input noisy images. In the second stage, we condense the supervised U-Transformer into a simplified version containing only one Transformer module with fewer parameters, and switch its training mode to unsupervised training, following an approach similar to the deep image prior method. This stage further eliminates the minor residual noise and artifacts present in the preprocessed image, yielding clearer and more realistic output images. Experimental results show that the proposed method achieves significant noise reduction on both synthetic and real images, surpassing state-of-the-art methods. This superiority stems from the supervised model's ability to process given noisy images rapidly, while the unsupervised model leverages its flexibility to generate a fine-tuned network with enhanced noise-reduction capability. Moreover, because the supervised model supplies higher-quality preprocessed images, the unsupervised fine-tuning model requires fewer parameters, which facilitates rapid training and convergence and results in high overall execution efficiency.
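The second stage described above can be sketched in miniature. This is a hedged illustration only, not the authors' implementation: a tiny two-layer dense network stands in for the paper's single-Transformer-module refiner, and it is fitted, deep-image-prior style, to the preprocessed image alone (no clean target), so that a few gradient steps reproduce the image's structure while the fit to residual noise lags behind. All names (`fine_tune`, `init_params`, the toy signal) are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(n_pixels, hidden=64):
    """Random weights for a two-layer net mapping a fixed code z to an image."""
    return {
        "W1": rng.normal(0, 0.1, (hidden, n_pixels)),
        "W2": rng.normal(0, 0.1, (n_pixels, hidden)),
    }

def forward(params, z):
    h = np.tanh(params["W1"] @ z)   # hidden activations
    return params["W2"] @ h, h      # reconstructed image, cached activations

def fine_tune(preprocessed, steps=200, lr=0.05):
    """Unsupervised DIP-style loop: minimise ||f_theta(z) - preprocessed||^2
    by plain gradient descent, with the preprocessed image as the only target."""
    n = preprocessed.size
    z = rng.normal(0, 1, n)         # fixed random code, as in deep image prior
    params = init_params(n)
    losses = []
    for _ in range(steps):
        out, h = forward(params, z)
        err = out - preprocessed
        losses.append(float(np.mean(err ** 2)))
        # Backprop through both layers (constant factors absorbed into lr).
        gW2 = np.outer(err, h) / n
        dh = (params["W2"].T @ err) * (1 - h ** 2)
        gW1 = np.outer(dh, z) / n
        params["W2"] -= lr * gW2
        params["W1"] -= lr * gW1
    return forward(params, z)[0], losses

# Toy "preprocessed" image: a smooth signal with faint residual noise.
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
preprocessed = clean + rng.normal(0, 0.05, 256)
refined, losses = fine_tune(preprocessed)
```

In practice the paper pairs this loop with a far more expressive refiner and an early-stopping criterion; the point of the sketch is only that the second stage needs no clean labels, which is what gives the method its flexibility on out-of-distribution noise.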
Pages: 19
Related papers (50 total)
  • [1] Boosting fine-tuning via Conditional Online Knowledge Transfer. Liu, Zhiqiang; Li, Yuhong; Huang, Chengkai; Luo, KunTing; Liu, Yanxia. NEURAL NETWORKS, 2024, 169: 325-333.
  • [2] Bagging and Boosting Fine-Tuning for Ensemble Learning. Zhao, C.; Peng, R.; Wu, D. IEEE Transactions on Artificial Intelligence, 2024, 5(4): 1728-1742.
  • [3] Boosting with fine-tuning for deep image denoising. Xie, Zhonghua; Liu, Lingjun; Wang, Cheng; Chen, Zehong. SIGNAL PROCESSING, 2024, 217.
  • [4] Fine-tuning the selection of a reperfusion strategy. Van de Werf, Frans J. CIRCULATION, 2006, 114(19): 2002-2003.
  • [5] Fine-Tuning CLIP via Explainability Map Propagation for Boosting Image and Video Retrieval. Shalev, Yoav; Wolf, Lior. ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT I, 2024, 14608: 356-370.
  • [6] Boosting Query Efficiency of Meta Attack With Dynamic Fine-Tuning. Lin, Da; Wang, Yuan-Gen; Tang, Weixuan; Kang, Xiangui. IEEE SIGNAL PROCESSING LETTERS, 2022, 29: 2557-2561.
  • [7] Boosting generalization of fine-tuning BERT for fake news detection. Qin, Simeng; Zhang, Mingli. INFORMATION PROCESSING & MANAGEMENT, 2024, 61(4).
  • [8] Boosting Rice Yield by Fine-Tuning SPL Gene Expression. Wang, Lei; Zhang, Qifa. TRENDS IN PLANT SCIENCE, 2017, 22(8): 643-646.
  • [9] Unsupervised Person Re-identification: Clustering and Fine-tuning. Fan, Hehe; Zheng, Liang; Yan, Chenggang; Yang, Yi. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2018, 14(4).
  • [10] Factorized Convolutional Networks: Unsupervised Fine-Tuning for Image Clustering. Gui, Liang-Yan; Gui, Liangke; Wang, Yu-Xiong; Morency, Louis-Philippe; Moura, Jose M. F. 2018 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2018), 2018: 1205-1214.