Domain adaptive noise reduction with iterative knowledge transfer and style generalization learning

Cited: 1
Authors
Tang, Yufei [1 ,2 ]
Lyu, Tianling [3 ]
Jin, Haoyang [1 ,2 ]
Du, Qiang [1 ,2 ]
Wang, Jiping [1 ,2 ]
Li, Yunxiang [4 ]
Li, Ming [1 ,2 ]
Chen, Yang [5 ]
Zheng, Jian [1 ,2 ,6 ]
Affiliations
[1] Univ Sci & Technol China, Sch Biomed Engn Suzhou, Div Life Sci & Med, Hefei 230026, Peoples R China
[2] Chinese Acad Sci, Suzhou Inst Biomed Engn & Technol, Med Imaging Dept, Suzhou 215163, Peoples R China
[3] Zhejiang Lab, Res Ctr Augmented Intelligence, Hangzhou 310000, Peoples R China
[4] Nanovis Technol Co Ltd, Beiqing Rd, Beijing 100094, Peoples R China
[5] Southeast Univ, Sch Comp Sci & Engn, Lab Image Sci & Technol, Nanjing 210096, Peoples R China
[6] Shandong Lab Adv Biomat & Med Devices Weihai, Weihai 264200, Peoples R China
Keywords
LDCT; Domain adaptive noise reduction; Knowledge transfer; Style generalization learning; LOW-DOSE CT; GENERATIVE ADVERSARIAL NETWORK; MEDICAL IMAGE SEGMENTATION; ADAPTATION; RECONSTRUCTION;
DOI
10.1016/j.media.2024.103327
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Low-dose computed tomography (LDCT) denoising faces significant challenges in practical imaging scenarios. Supervised methods are difficult to apply in real-world settings because paired training data are unavailable. Moreover, when applied to datasets with varying noise patterns, these methods may suffer degraded performance owing to the domain gap. Conversely, unsupervised methods do not require paired data and can be trained directly on real-world data, but they often perform worse than supervised methods. To address this issue, it is necessary to leverage the strengths of both supervised and unsupervised methods. In this paper, we propose a novel domain adaptive noise reduction framework (DANRF), which integrates knowledge transfer and style generalization learning to effectively tackle the domain gap problem. Specifically, an iterative knowledge transfer method with knowledge distillation is employed to train the target model using unlabeled target data and a source model pre-trained on paired simulation data. Meanwhile, we introduce the mean teacher mechanism to update the source model, enabling it to adapt to the target domain. Furthermore, an iterative style generalization learning process is designed to enrich the style diversity of the training dataset. We evaluate our approach through experiments on multi-source datasets. The results demonstrate the feasibility and effectiveness of the proposed DANRF model in multi-source LDCT image processing tasks. Given its hybrid nature, which combines the advantages of supervised and unsupervised learning, and its ability to bridge domain gaps, our approach is well suited to improving practical low-dose CT imaging in clinical settings. Code for our proposed approach is publicly available at https://github.com/tyfeiii/DANRF.
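The sketch below illustrates, in generic PyTorch, the two mechanisms the abstract names: knowledge distillation from a pre-trained source-domain denoiser to a target-domain student using unlabeled target LDCT images, and a mean-teacher EMA update that lets the source model adapt toward the target domain. The architecture, loss, hyperparameters, and all names (make_denoiser, ema_update, distillation_step) are illustrative assumptions, not taken from the authors' released code; see the GitHub repository above for the actual implementation.

```python
# Minimal, illustrative sketch (not the authors' implementation) of
# (1) knowledge distillation from a pre-trained source-domain denoiser to a
#     target-domain student on unlabeled target LDCT images, and
# (2) a mean-teacher EMA update so the teacher gradually adapts to the target domain.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_denoiser() -> nn.Module:
    # Placeholder CNN denoiser; the model used in the paper is more elaborate.
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 1, 3, padding=1),
    )


@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, decay: float = 0.999) -> None:
    # Mean-teacher update: teacher weights become an exponential moving average
    # of the student weights, letting the source model drift toward the target domain.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)


# Teacher starts from the source model pre-trained on paired simulation data;
# the student is trained on unlabeled target-domain LDCT images.
teacher = make_denoiser()          # in practice: load pre-trained source weights
student = copy.deepcopy(teacher)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)


def distillation_step(ldct_batch: torch.Tensor) -> float:
    """One knowledge-transfer step on an unlabeled target-domain batch."""
    with torch.no_grad():
        pseudo_clean = teacher(ldct_batch)        # teacher provides pseudo-labels
    loss = F.mse_loss(student(ldct_batch), pseudo_clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)                  # teacher adapts via EMA
    return loss.item()


# Example usage with a dummy batch of 64x64 single-channel LDCT patches.
print(distillation_step(torch.rand(4, 1, 64, 64)))
```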
Pages: 13