Fully automated longitudinal segmentation of new or enlarged multiple sclerosis lesions using 3D convolutional neural networks

Cited by: 32
Authors
Krueger, Julia [1 ]
Opfer, Roland [1 ]
Gessert, Nils [2 ]
Ostwaldt, Ann-Christin [1 ]
Manogaran, Praveena [3 ,4 ,7 ]
Kitzler, Hagen H. [8 ]
Schlaefer, Alexander [2 ]
Schippling, Sven [3 ,4 ,5 ,6 ]
Affiliations
[1] Jung Diagnost GmbH, Hamburg, Germany
[2] Hamburg Univ Technol, Inst Med Technol, Hamburg, Germany
[3] Univ Hosp Zurich, Neuroimmunol & Multiple Sclerosis Res, Dept Neurol, Zurich, Switzerland
[4] Univ Zurich, Zurich, Switzerland
[5] Univ Zurich, Neurosci Ctr Zurich, Zurich, Switzerland
[6] Fed Inst Technol ETH, Zurich, Switzerland
[7] Swiss Fed Inst Technol, Dept Informat Technol & Elect Engn, Zurich, Switzerland
[8] Tech Univ Dresden, Inst Diagnost & Intervent Neuroradiol, Univ Hosp Carl Gustav Carus, Dresden, Germany
Keywords
Multiple sclerosis; Lesion activity; Convolutional neural network; U-net; Lesion segmentation; WHITE-MATTER LESIONS; BRAIN MRI; SUBTRACTION; TOOL
DOI
10.1016/j.nicl.2020.102445
Chinese Library Classification (CLC)
R445 [Diagnostic Imaging]
Subject classification code
100207
Abstract
The quantification of new or enlarged lesions from follow-up MRI scans is an important surrogate marker of clinical disease activity in patients with multiple sclerosis (MS). Not only is manual segmentation time-consuming, but inter-rater variability is high. Currently, only a few fully automated methods are available. We address this gap by employing a 3D convolutional neural network (CNN) with an encoder-decoder architecture for fully automatic longitudinal lesion segmentation. Input data consist of two fluid-attenuated inversion recovery (FLAIR) images (baseline and follow-up) per patient. Each image is passed through the encoder, and the resulting feature maps are concatenated and fed into the decoder. The output is a 3D mask indicating new or enlarged lesions relative to the baseline scan. The proposed method was trained on 1809 single-time-point and 1444 longitudinal patient data sets and then validated on 185 independent longitudinal data sets from two different scanners. For each of the two validation data sets, manual segmentations from three experienced raters were available. The performance of the proposed method was compared to the open-source Lesion Segmentation Toolbox (LST), a current state-of-the-art longitudinal lesion segmentation method. The mean lesion-wise inter-rater sensitivity was 62%, while the mean inter-rater number of false-positive (FP) findings was 0.41 lesions per case. The two evaluated algorithms showed a mean sensitivity of 60% (CNN) and 46% (LST) and a mean FP count of 0.48 (CNN) and 1.86 (LST) per case. Sensitivity and the number of FPs did not differ significantly between the CNN and the manual raters (significance level p < 0.05). New or enlarged lesions counted by the CNN algorithm appeared to be comparable with manual expert ratings. The proposed algorithm seems to outperform currently available approaches, in particular LST. The high inter-rater variability of manual segmentation indicates the complexity of identifying new or enlarged lesions. An automated CNN-based approach can quickly provide an independent and deterministic assessment of new or enlarged lesions from baseline to follow-up scans with acceptable reliability.
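The abstract outlines the network design: both FLAIR time points pass through the same encoder, the resulting feature maps are concatenated, and a decoder predicts a 3D mask of new or enlarged lesions. Below is a minimal PyTorch sketch of that two-time-point encoder-decoder pattern; the layer counts, channel widths, normalization, and skip-connection scheme are illustrative assumptions and do not reproduce the authors' published architecture.

```python
# Minimal sketch (assumed, not the authors' released code) of a longitudinal
# encoder-decoder: a shared 3D encoder processes baseline and follow-up FLAIR
# volumes, the features are concatenated, and the decoder outputs a voxel-wise
# mask of new or enlarged lesions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3x3 convolutions with instance norm and ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )


class LongitudinalLesionNet(nn.Module):
    def __init__(self, base_ch=16):
        super().__init__()
        # Shared encoder applied to baseline and follow-up independently.
        self.enc1 = conv_block(1, base_ch)
        self.enc2 = conv_block(base_ch, base_ch * 2)
        self.pool = nn.MaxPool3d(2)
        # Decoder operates on the concatenated features of both time points.
        self.up = nn.ConvTranspose3d(base_ch * 4, base_ch * 2, kernel_size=2, stride=2)
        self.dec = conv_block(base_ch * 4, base_ch)
        self.head = nn.Conv3d(base_ch, 1, kernel_size=1)  # voxel-wise lesion logit

    def encode(self, x):
        f1 = self.enc1(x)              # full-resolution features
        f2 = self.enc2(self.pool(f1))  # half-resolution features
        return f1, f2

    def forward(self, baseline, followup):
        b1, b2 = self.encode(baseline)
        f1, f2 = self.encode(followup)
        # Fuse the two time points by channel-wise concatenation.
        bottleneck = torch.cat([b2, f2], dim=1)
        x = self.up(bottleneck)
        # Skip connection from both time points at full resolution.
        x = self.dec(torch.cat([x, b1, f1], dim=1))
        return self.head(x)            # logits for the new/enlarged lesion mask


# Toy usage: two registered FLAIR patches of shape (batch, 1, D, H, W).
if __name__ == "__main__":
    net = LongitudinalLesionNet()
    baseline = torch.randn(1, 1, 32, 64, 64)
    followup = torch.randn(1, 1, 32, 64, 64)
    print(net(baseline, followup).shape)  # torch.Size([1, 1, 32, 64, 64])
```

A voxel-wise loss on the change mask (e.g., Dice or cross-entropy) would complete such a training setup; those details, as well as preprocessing such as co-registration of the two time points, are not specified in the abstract and are left out of the sketch.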
Pages: 10
Related papers
50 records in total
  • [21] Fully Automated 3D Cardiac MRI Localisation and Segmentation Using Deep Neural Networks
    Vesal, Sulaiman
    Maier, Andreas
    Ravikumar, Nishant
    [J]. JOURNAL OF IMAGING, 2020, 6 (07)
  • [22] Fully automated 3D segmentation and separation of multiple cervical vertebrae in CT images using a 2D convolutional neural network
    Bae, Hyun-Jin
    Hyun, Heejung
    Byeon, Younghwa
    Shin, Keewon
    Cho, Yongwon
    Song, Young Ji
    Yi, Seong
    Kuh, Sung-Uk
    Yeom, Jin S.
    Kim, Namkug
    [J]. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2020, 184
  • [23] Longitudinal Multiple Sclerosis Lesion Segmentation Using Multi-view Convolutional Neural Networks
    Birenbaum, Ariel
    Greenspan, Hayit
    [J]. DEEP LEARNING AND DATA LABELING FOR MEDICAL APPLICATIONS, 2016, 10008 : 58 - 67
  • [24] Correction to: Fully automated body composition analysis in routine CT imaging using 3D semantic segmentation convolutional neural networks
    Sven Koitka
    Lennard Kroll
    Eugen Malamutmann
    Arzu Oezcelik
    Felix Nensa
    [J]. European Radiology, 2021, 31 : 4402 - 4403
  • [25] Combining Fully Convolutional and Recurrent Neural Networks for 3D Biomedical Image Segmentation
    Chen, Jianxu
    Yang, Lin
    Zhang, Yizhe
    Alber, Mark
    Chen, Danny Z.
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [26] Segmentation of tomography datasets using 3D convolutional neural networks
    James, Jim
    Pruyne, Nathan
    Stan, Tiberiu
    Schwarting, Marcus
    Yeom, Jiwon
    Hong, Seungbum
    Voorhees, Peter
    Blaiszik, Ben
    Foster, Ian
    [J]. COMPUTATIONAL MATERIALS SCIENCE, 2023, 216
  • [27] Automated segmentation of the knee for age assessment in 3D MR images using convolutional neural networks
    Pröve, Paul-Louis
    Jopp-van Well, Eilin
    Stanczus, Ben
    Morlock, Michael M.
    Herrmann, Jochen
    Groth, Michael
    Säring, Dennis
    Auf der Mauer, Markus
    [J]. INTERNATIONAL JOURNAL OF LEGAL MEDICINE, 2019, 133 (04) : 1191 - 1205
  • [29] Fully automated segmentation of multiple sclerosis lesions in multispectral MRI
    Wels M.
    Huber M.
    Hornegger J.
    [J]. Pattern Recogn. Image Anal., 2008, (2) : 347 - 350
  • [30] Fully automated 2D and 3D convolutional neural networks pipeline for video segmentation and myocardial infarction detection in echocardiography
    Oumaima Hamila
    Sheela Ramanna
    Christopher J. Henry
    Serkan Kiranyaz
    Ridha Hamila
    Rashid Mazhar
    Tahir Hamid
    [J]. Multimedia Tools and Applications, 2022, 81 : 37417 - 37439