Unsupervised Pre-training Across Image Domains Improves Lung Tissue Classification

Cited by: 52
Authors
Schlegl, Thomas [1 ]
Ofner, Joachim [1 ]
Langs, Georg [1 ]
Affiliations
[1] Med Univ Vienna, Dept Biomed Imaging & Image Guided Therapy, Computat Imaging Res Lab, Vienna, Austria
Keywords
DOI
10.1007/978-3-319-13972-2_8
CLC Classification Number
TP18 [Artificial intelligence theory];
Subject Classification Numbers
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The detection and classification of anomalies relevant for disease diagnosis or treatment monitoring is an important part of computational medical image analysis. Obtaining enough annotated training data to represent natural variability well is often infeasible. At the same time, data is frequently collected across multiple sites with heterogeneous medical imaging equipment. In this paper we propose and evaluate a semi-supervised learning approach that uses data from multiple sites (domains), with annotations available for only one small site. We use convolutional neural networks to capture spatial appearance patterns and classify lung tissue in high-resolution computed tomography data. We perform domain adaptation via unsupervised pre-training of convolutional neural networks to inject information from sites or image classes for which no annotations are available. Results show that across-site pre-training, as well as pre-training on different image classes, improves classification accuracy compared to random initialisation of the model parameters.
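The two-stage scheme the abstract describes — unsupervised pre-training on unlabelled data from other sites, followed by supervised fine-tuning on the one annotated site — can be sketched in miniature. The snippet below is a simplified, hypothetical illustration only: it uses a one-hidden-layer tied-weight autoencoder on random stand-in arrays rather than the paper's convolutional network and CT patches, and fine-tunes only a classifier head while keeping the pre-trained encoder fixed. All names and data here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Unlabelled "cross-site" patches (random stand-in data, 64-dim features).
X_unlab = rng.normal(size=(256, 64))

# --- Stage 1: unsupervised pre-training (tied-weight autoencoder) ---
n_hidden = 16
W = rng.normal(scale=0.1, size=(64, n_hidden))  # encoder weights (decoder = W.T)
b = np.zeros(n_hidden)                          # encoder bias
c = np.zeros(64)                                # decoder bias
lr = 0.01

def recon_loss(X, W, b, c):
    H = sigmoid(X @ W + b)
    return np.mean((H @ W.T + c - X) ** 2)

loss_before = recon_loss(X_unlab, W, b, c)
for _ in range(200):
    H = sigmoid(X_unlab @ W + b)           # encode
    err = (H @ W.T + c) - X_unlab          # reconstruction error
    dpre = (err @ W) * H * (1 - H)         # backprop through encoder nonlinearity
    gW = X_unlab.T @ dpre + err.T @ H      # tied-weight gradient (both paths)
    W -= lr * gW / len(X_unlab)
    b -= lr * dpre.mean(axis=0)
    c -= lr * err.mean(axis=0)
loss_after = recon_loss(X_unlab, W, b, c)

# --- Stage 2: supervised fine-tuning on the small annotated site ---
X_lab = rng.normal(size=(64, 64))
y = (X_lab[:, 0] > 0).astype(float)        # stand-in tissue labels
v = rng.normal(scale=0.1, size=n_hidden)   # classifier head on pre-trained features

for _ in range(200):
    H = sigmoid(X_lab @ W + b)             # features from the pre-trained encoder
    p = sigmoid(H @ v)                     # logistic classifier
    v -= 0.1 * H.T @ (p - y) / len(y)      # cross-entropy gradient step (head only)

print("reconstruction loss before/after pre-training:", loss_before, loss_after)
```

In the paper the analogous initialisation is applied to convolutional filters and the whole network is then fine-tuned, which is what lets unlabelled data from other sites or image classes shape the learned features.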
Pages: 82-93
Page count: 12
Related Papers
50 records in total
  • [1] Pre-training on Grayscale ImageNet Improves Medical Image Classification
    Xie, Yiting
    Richmond, David
    [J]. COMPUTER VISION - ECCV 2018 WORKSHOPS, PT VI, 2019, 11134 : 476 - 484
  • [2] Self-supervised pre-training improves fundus image classification for diabetic retinopathy
    Lee, Joohyung
    Lee, Eung-Joo
    [J]. REAL-TIME IMAGE PROCESSING AND DEEP LEARNING 2022, 2022, 12102
  • [3] Domain-Specific Pre-training Improves Confidence in Whole Slide Image Classification
    Chitnis, Soham Rohit
    Liu, Sidong
    Dash, Tirtharaj
    Verlekar, Tanmay Tulsidas
    Di Ieva, Antonio
    Berkovsky, Shlomo
    Vig, Lovekesh
    Srinivasan, Ashwin
    [J]. 2023 45TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY, EMBC, 2023,
  • [4] Unsupervised Pre-Training for Detection Transformers
    Dai, Zhigang
    Cai, Bolun
    Lin, Yugeng
    Chen, Junying
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (11) : 12772 - 12782
  • [5] Unsupervised Pre-Training of Image Features on Non-Curated Data
    Caron, Mathilde
    Bojanowski, Piotr
    Mairal, Julien
    Joulin, Armand
    [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 2959 - 2968
  • [6] Unsupervised Pre-Training for Voice Activation
    Kolesau, Aliaksei
    Sesok, Dmitrij
    [J]. APPLIED SCIENCES-BASEL, 2020, 10 (23): 1 - 13
  • [7] Image classification with quantum pre-training and auto-encoders
    Piat, Sebastien
    Usher, Nairi
    Severini, Simone
    Herbster, Mark
    Mansi, Tommaso
    Mountney, Peter
    [J]. INTERNATIONAL JOURNAL OF QUANTUM INFORMATION, 2018, 16 (08)
  • [8] Benchmarking the influence of pre-training on explanation performance in MR image classification
    Oliveira, Marta
    Wilming, Rick
    Clark, Benedict
    Budding, Celine
    Eitel, Fabian
    Ritter, Kerstin
    Haufe, Stefan
    [J]. FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2024, 7
  • [9] Neural speech enhancement with unsupervised pre-training and mixture training
    Hao, Xiang
    Xu, Chenglin
    Xie, Lei
    [J]. NEURAL NETWORKS, 2023, 158 : 216 - 227
  • [10] SELF PRE-TRAINING WITH MASKED AUTOENCODERS FOR MEDICAL IMAGE CLASSIFICATION AND SEGMENTATION
    Zhou, Lei
    Liu, Huidong
    Bae, Joseph
    He, Junjun
    Samaras, Dimitris
    Prasanna, Prateek
    [J]. 2023 IEEE 20TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING, ISBI, 2023,