Boosting Noise Reduction Effect via Unsupervised Fine-Tuning Strategy

Cited by: 1
Authors
Jiang, Xinyi [1 ]
Xu, Shaoping [1 ]
Wu, Junyun [1 ]
Zhou, Changfei [1 ]
Ji, Shuichen [1 ]
Affiliations
[1] Nanchang Univ, Sch Math & Comp Sci, Nanchang 330031, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 05
Keywords
boosting denoising effect; supervised denoising models; data bias; unsupervised denoising models; flexibility; fine-tuning; IMAGE; SPARSE;
DOI
10.3390/app14051742
CLC Number
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Over the last decade, supervised denoising models trained on extensive datasets have achieved remarkable image-denoising performance. However, these models offer limited flexibility and suffer varying degrees of degradation in noise reduction capability in practical scenarios, particularly when the noise distribution of a given noisy image deviates from that of the training images. To tackle this problem, we propose a two-stage denoising model that appends an unsupervised fine-tuning phase after a supervised denoising model has processed the input noisy image and produced a denoised result (regarded as a preprocessed image). Specifically, in the first stage we replace the convolution blocks of the U-shaped network framework (as used in the deep image prior method) with Transformer modules; the resulting model is referred to as a U-Transformer. The U-Transformer is trained on pairs of noisy images and their labels to preprocess the input noisy images. In the second stage, we condense the supervised U-Transformer into a simplified version containing only one Transformer module with fewer parameters, and switch its training mode to unsupervised training, following an approach similar to that of the deep image prior method. This stage further removes the minor residual noise and artifacts present in the preprocessed image, yielding clearer and more realistic output images. Experimental results show that the proposed method achieves significant noise reduction on both synthetic and real images, surpassing state-of-the-art methods. This superiority stems from the supervised model's ability to process given noisy images rapidly, while the unsupervised model exploits its flexibility to produce a fine-tuned network with enhanced noise reduction capability. Moreover, because the supervised model supplies higher-quality preprocessed images, the unsupervised fine-tuning model requires fewer parameters, which enables rapid training and convergence and results in high overall execution efficiency.
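The abstract describes a two-stage pipeline: a supervised U-Transformer produces a preprocessed image, and a lightweight network is then fine-tuned on that single image in an unsupervised, deep-image-prior-like manner. The following is a minimal, hypothetical PyTorch sketch of such a second-stage fine-tuning loop; the RefineNet module, the unsupervised_fine_tune function, the residual design, and all hyperparameters (step count, learning rate) are illustrative assumptions for exposition, not the authors' actual U-Transformer code.

import torch
import torch.nn as nn

# Sketch of a deep-image-prior-style unsupervised fine-tuning stage.
# Assumptions (not from the paper): a tiny convolutional refiner stands in
# for the single Transformer module the authors describe; `preprocessed`
# is the output of the supervised first stage and `noisy` is the original
# noisy input, both 1 x C x H x W tensors.

class RefineNet(nn.Module):
    """Placeholder lightweight refiner (the paper uses one Transformer module)."""
    def __init__(self, channels=3, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x):
        # Predict a residual correction on top of the preprocessed image.
        return x + self.body(x)


def unsupervised_fine_tune(preprocessed, noisy, steps=300, lr=1e-3):
    """Fine-tune the refiner on a single image without any clean label.

    The loss pulls the refined output toward the noisy observation while the
    preprocessed image acts as a strong prior; limiting the number of steps
    keeps the network from re-fitting the noise, as in deep image prior.
    """
    net = RefineNet(channels=preprocessed.shape[1])
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        out = net(preprocessed)
        loss = nn.functional.mse_loss(out, noisy)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return net(preprocessed)


if __name__ == "__main__":
    # Toy usage with random tensors standing in for real images.
    noisy = torch.rand(1, 3, 64, 64)
    preprocessed = noisy.clamp(0.1, 0.9)  # stand-in for the stage-one output
    refined = unsupervised_fine_tune(preprocessed, noisy, steps=50)
    print(refined.shape)

Because the refiner has few parameters and starts from a high-quality preprocessed image, a small number of optimization steps typically suffices, which is consistent with the abstract's claim of rapid training and convergence.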
Pages: 19