Keywords:
Deep learning;
U-Net;
Transfer learning;
Iterative Refinement;
Semantic Segmentation;
Across modalities;
Weak labels;
DOI:
10.1007/978-3-031-48593-0_3
CLC number:
TP18 [Artificial Intelligence Theory];
Subject classification codes:
081104 ;
0812 ;
0835 ;
1405 ;
Abstract:
Medical image segmentation is required in a number of treatments and procedures, such as detecting pathological changes and planning organ resection. However, it is time-consuming when done manually. Automatic segmentation algorithms such as deep learning methods overcome this hurdle, but they are data-hungry and require expert ground-truth annotations, which is a limitation, particularly for medical datasets. On the other hand, unannotated medical datasets are easier to come by and can be used in several ways to learn ground-truth masks. In this paper, we use across-modality transfer learning to transfer knowledge learned from a large, publicly available, expertly annotated computed tomography (CT) dataset to a small unannotated dataset in a different modality, magnetic resonance (MR). Moreover, we show that quickly generated weak annotations can be improved iteratively using a pre-trained U-Net model, approaching the ground-truth masks over successive iterations. This methodology was validated qualitatively on an in-house MR dataset where professionals were asked to choose between the model output and the weak annotations; they chose the model output 93% to 94% of the time. We also validated it quantitatively on the publicly available annotated Combined (CT-MR) Healthy Abdominal Organ Segmentation (CHAOS) dataset, where the weak annotations improved from a Dice score of 87.5% to 92.2% against the ground-truth annotations across three iterations.
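The iterative refinement scheme the abstract describes can be sketched in a few lines: at each iteration, the pre-trained model is fine-tuned on the current (initially weak) masks, and its predictions then replace those masks for the next round. The sketch below is a minimal illustration, not the paper's implementation; `train_fn`, `predict_fn`, and the Dice helper are hypothetical names, and the actual work would use a U-Net and real training code.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks
    (the metric the paper reports, e.g. 87.5% -> 92.2%)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def refine_labels(images, weak_masks, train_fn, predict_fn, n_iterations=3):
    """Iteratively refine weak annotations.

    train_fn(images, masks) -> model   : fine-tune the pre-trained model
                                         on the current labels
    predict_fn(model, images) -> masks : the model's predictions become
                                         the labels for the next iteration
    """
    masks = weak_masks
    for _ in range(n_iterations):
        model = train_fn(images, masks)
        masks = predict_fn(model, images)
    return masks
```

In practice `train_fn` would fine-tune the CT-pre-trained U-Net on the MR images and `predict_fn` would run inference; the sketch only captures the loop structure of the three refinement iterations.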