Auto-segmentation method based on deep learning for the knee joint in MR images

Cited: 0
Authors
Yu N. [1 ,2 ]
Liu J. [1 ,2 ]
Gao L. [1 ,2 ]
Sun Z. [3 ]
Han J. [1 ,2 ]
Affiliations
[1] College of Artificial Intelligence, Nankai University, Tianjin
[2] Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin
[3] Institute of Sports Medicine, Peking University Third Hospital, Beijing
Keywords
Convolutional neural network (CNN); Deep learning; Knee joint; Magnetic resonance images; Medical image segmentation
DOI
10.19650/j.cnki.cjsi.J2006199
Abstract
Automatic segmentation of the knee joint in magnetic resonance (MR) images is of significant clinical value, but it is challenging because the segmentation targets differ dramatically in size. In this study, an end-to-end DRD U-Net based on a deep learning framework is proposed. The residual module is used as the basic building block of the U-Net model, which improves the reuse of feature maps. Parallel dilated convolution modules provide multiple receptive fields, overcoming the single-receptive-field limitation of the original U-Net and effectively improving the segmentation of targets of different sizes. A multi-output fusion deep supervision module is designed to directly exploit feature maps at different levels, so that complementary information is combined and the consistency and accuracy of the segmented regions are improved. The proposed algorithm is evaluated on the public OAI-ZIB data set: the average surface distance is 0.2 mm, the root mean square surface distance is 0.43 mm, the Hausdorff distance is 5.22 mm, the average Dice similarity coefficient (DSC) is 93.05%, and the volume overlap error is 3.86%. Compared with the conventional U-Net and other currently available models, the proposed DRD U-Net achieves better segmentation accuracy. © 2020, Science Press. All rights reserved.
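Two points in the abstract can be made concrete with standard textbook formulas: parallel dilation rates yield different effective receptive fields, and segmentation quality is scored by overlap metrics such as DSC and volume overlap error. The sketch below is illustrative only, assuming the usual definitions; the function names, dilation rates, and toy masks are not taken from the paper.

```python
# Effective kernel size of a dilated convolution: d * (k - 1) + 1.
# Parallel branches with different dilation rates therefore cover
# receptive fields of different sizes, as the abstract describes.
def effective_kernel(k, d):
    """Effective spatial extent of a kxk convolution with dilation d."""
    return d * (k - 1) + 1

# Standard overlap metrics on flat binary masks (lists of 0/1 voxels).
def dice_coefficient(pred, truth):
    """DSC = 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

def volume_overlap_error(pred, truth):
    """VOE = 1 - |A ∩ B| / |A ∪ B| (complement of the Jaccard index)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return 1.0 - inter / union if union else 0.0

# Three parallel 3x3 branches with dilations 1, 2, 4 (illustrative rates).
print([effective_kernel(3, d) for d in (1, 2, 4)])  # [3, 5, 9]

# Toy 1-D "segmentation": 4 voxels overlap out of 5 predicted / 5 true.
pred  = [1, 1, 1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 1, 1, 0, 0]
print(f"DSC = {dice_coefficient(pred, truth):.2f}")    # 2*4/(5+5) = 0.80
print(f"VOE = {volume_overlap_error(pred, truth):.2f}")  # 1 - 4/6 = 0.33
```

A DSC of 1.0 and VOE of 0.0 indicate perfect agreement; the paper's reported 93.05% DSC and 3.86% VOE are evaluated on real 3-D knee MR volumes rather than toy masks.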
Pages: 140-149
Number of pages: 9
References
27 items in total
  • [1] XUE Q Y, WANG K ZH, PEI F X, et al., The survey of the prevalence of primary osteoarthritis in the population aged 40 years and over in China, Chinese Journal of Orthopaedics, 35, 12, pp. 1206-1212, (2015)
  • [2] LI X, MAJUMDAR S., Quantitative MRI of articular cartilage and its clinical applications, Journal of Magnetic Resonance Imaging, 38, 5, pp. 991-1008, (2013)
  • [3] LEI J T, TANG M Y, WANG J CH, et al., Review of the preoperative planning of robot assisted knee arthroplasty, Journal of Mechanical Engineering, 53, 17, pp. 78-91, (2017)
  • [4] HEIMANN T, MORRISON B J, STYNER M A, et al., Segmentation of knee images: A grand challenge, Proceeding MICCAI Workshop on Medical Image Analysis for the Clinic, pp. 207-214, (2010)
  • [5] TAMEZ-PENA J G, FARBER J, GONZALEA P C, et al., Unsupervised segmentation and quantification of anatomical knee features: data from the osteoarthritis initiative, IEEE Transactions on Biomedical Engineering, 59, 4, pp. 1177-1186, (2012)
  • [6] SEIM H, KAINMUELLER D, LAMECKER H, et al., Model-based auto-segmentation of knee bones and cartilage in MRI data, Proceeding Medical Image Analysis for the Clinic: A Grand Challenge, in conjunction with MICCAI, pp. 215-223, (2010)
  • [7] ZHANG K, LU W, MARZILANO P., Automatic knee cartilage segmentation from multi-contrast MR images using support vector machine classification with spatial dependencies, Magnetic Resonance Imaging, 31, 10, pp. 1731-1743, (2013)
  • [8] COURTIOL P, MAUSSION C, MOARII M, et al., Deep learning-based classification of mesothelioma improves prediction of patient outcome, Nature Medicine, 25, 10, pp. 1519-1525, (2019)
  • [9] DAI X K, WANG X SH, DU L H, et al., Automatic segmentation of head and neck organs at risk based on three-dimensional U-NET deep convolutional neural network, Journal of Biomedical Engineering, 37, 1, pp. 136-141, (2020)
  • [10] PRASOON A, PETERSEN K, IGEL C, et al., Deep feature learning for knee cartilage segmentation using a triplanar convolutional neural network, International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 246-253, (2013)