Leveraging human expert image annotations to improve pneumonia differentiation through human knowledge distillation

Cited by: 0
Authors
Daniel Schaudt
Reinhold von Schwerin
Alexander Hafner
Pascal Riedel
Christian Späte
Manfred Reichert
Andreas Hinteregger
Meinrad Beer
Christopher Kloth
Affiliations
[1] Ulm University of Applied Sciences, Department of Computer Science
[2] Ulm University, Institute of Databases and Information Systems
[3] University Hospital of Ulm, Department of Radiology
Abstract
In medical imaging, deep learning models can be a critical tool for shortening time-to-diagnosis and supporting specialized medical staff in clinical decision making. Successfully training deep learning models usually requires large amounts of high-quality data, which are often unavailable in medical imaging tasks. In this work, we train a deep learning model on university hospital chest X-ray data containing 1082 images. The data was reviewed, differentiated into four causes of pneumonia, and annotated by an expert radiologist. To successfully train a model on this small amount of complex image data, we propose a special knowledge distillation process, which we call Human Knowledge Distillation. This process enables deep learning models to utilize annotated regions in the images during training; this form of guidance by a human expert improves model convergence and performance. We evaluate the proposed process on our study data for multiple model types, all of which show improved results. The best model of this study, called PneuKnowNet, improves overall accuracy by 2.3 percentage points over a baseline model and also yields more meaningful decision regions. Exploiting this implicit data quality-quantity trade-off can be a promising approach for many scarce-data domains beyond medical imaging.
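The abstract does not give implementation details, but the guidance idea it describes (letting the model use expert-annotated regions during training) can be illustrated with a small sketch. Below is a minimal, assumed PyTorch formulation in which the usual classification loss is combined with an auxiliary term that pulls the model's spatial attention toward a binary expert annotation mask. The class SmallCNN, the helper attention_map, the weight lambda_guidance, and the exact loss form are illustrative assumptions, not the paper's actual Human Knowledge Distillation procedure.

```python
# Sketch only: guide a CNN classifier with expert-annotated regions via an
# auxiliary attention-alignment loss. Names and loss form are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Toy classifier that also exposes its last feature map for guidance."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        feats = self.features(x)                     # (B, 32, H/2, W/2)
        logits = self.head(feats.mean(dim=(2, 3)))   # global average pooling
        return logits, feats

def attention_map(feats):
    """Collapse channels into a normalized spatial attention map."""
    attn = feats.abs().mean(dim=1, keepdim=True)            # (B, 1, h, w)
    return attn / (attn.amax(dim=(2, 3), keepdim=True) + 1e-8)

def guided_loss(logits, labels, feats, region_mask, lambda_guidance=0.5):
    """Cross-entropy plus a penalty for attention outside the expert region."""
    ce = F.cross_entropy(logits, labels)
    attn = attention_map(feats)
    mask = F.interpolate(region_mask, size=attn.shape[-2:], mode="nearest")
    guidance = F.mse_loss(attn, mask)   # pull attention toward annotated region
    return ce + lambda_guidance * guidance

# Usage with dummy chest X-ray sized tensors and binary annotation masks.
model = SmallCNN()
images = torch.randn(2, 1, 64, 64)
labels = torch.tensor([0, 3])
masks = torch.zeros(2, 1, 64, 64)
masks[:, :, 20:40, 20:40] = 1.0
logits, feats = model(images)
loss = guided_loss(logits, labels, feats, masks)
loss.backward()
```

Under these assumptions, images without expert annotations could simply use lambda_guidance = 0, so the guidance term would act as an optional regularizer rather than a hard requirement.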
Related papers
50 items in total
  • [41] Using Intelligent Personal Annotations to Improve Human Activity Recognition for Movements in Natural Environments
    Akbari, Ali
    Castilla, Roger Solis
    Jafari, Roozbeh
    Mortazavi, Bobak J.
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2020, 24 (09) : 2639 - 2650
  • [42] Functional Annotations Improve the Predictive Score of Human Disease-Related Mutations in Proteins
    Calabrese, Remo
    Capriotti, Emidio
    Fariselli, Piero
    Martelli, Pier Luigi
    Casadio, Rita
    HUMAN MUTATION, 2009, 30 (08) : 1237 - 1244
  • [43] Human genome project: Revolutionizing biology through leveraging technology
    Dahl, CA
    Strausberg, RL
    ULTRASENSITIVE BIOCHEMICAL DIAGNOSTICS, PROCEEDINGS OF, 1996, 2680 : 190 - 201
  • [44] Feature decoupled knowledge distillation enabled lightweight image transmission through multimode fibers
    Li, Fujie
    Yao, Li
    Niu, Wenqing
    Li, Ziwei
    Shi, Jianyang
    Zhang, Junwen
    Shen, Chao
    Chi, Nan
    OPTICS EXPRESS, 2024, 32 (03) : 4201 - 4214
  • [45] Efficient image classification through collaborative knowledge distillation: A novel AlexNet modification approach
    Kuldashboy, Avazov
    Umirzakova, Sabina
    Allaberdiev, Sharofiddin
    Nasimov, Rashid
    Abdusalomov, Akmalbek
    Cho, Young Im
    HELIYON, 2024, 10 (14)
  • [46] Human Activity Recognition-Oriented Incremental Learning with Knowledge Distillation
    Chen, Caijuan
    Ota, Kaoru
    Dong, Mianxiong
    Yu, Chen
    Jin, Hai
    JOURNAL OF CIRCUITS SYSTEMS AND COMPUTERS, 2021, 30 (06)
  • [47] MEDKD: Enhancing Medical Image Classification with Multiple Expert Decoupled Knowledge Distillation for Long-Tail Data
    Zhang, Fuheng
    Li, Sirui
    Wei, Tianyunxi
    Lin, Li
    Huang, Yijin
    Cheng, Pujin
    Tang, Xiaoying
    MACHINE LEARNING IN MEDICAL IMAGING, MLMI 2023, PT II, 2024, 14349 : 314 - 324
  • [48] Progressive Cross-modal Knowledge Distillation for Human Action Recognition
    Ni, Jianyuan
    Ngu, Anne H. H.
    Yan, Yan
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 5903 - 5912
  • [49] Multi-Objective Diverse Human Motion Prediction with Knowledge Distillation
    Ma, Hengbo
    Li, Jiachen
    Hosseini, Ramtin
    Tomizuka, Masayoshi
    Choi, Chiho
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 8151 - 8161
  • [50] Class relationship-based knowledge distillation for efficient human parsing
    Lang, Yuqi
    Liu, Kunliang
    Wang, Jianming
    Hwang, Wonjun
    ELECTRONICS LETTERS, 2023, 59 (15)