Automated segmentation of an intensity calibration phantom in clinical CT images using a convolutional neural network

Cited: 10
Authors
Uemura, Keisuke [1 ,2 ]
Otake, Yoshito [1 ]
Takao, Masaki [3 ]
Soufi, Mazen [1 ]
Kawasaki, Akihiro [1 ]
Sugano, Nobuhiko [2 ]
Sato, Yoshinobu [1 ]
Affiliations
[1] Nara Inst Sci & Technol, Div Informat Sci, Grad Sch Sci & Technol, Ikoma, Nara, Japan
[2] Osaka Univ, Grad Sch Med, Dept Orthopaed Med Engn, Suita, Osaka, Japan
[3] Osaka Univ, Dept Orthopaed, Grad Sch Med, Suita, Osaka, Japan
Funding
Japan Society for the Promotion of Science;
Keywords
Artificial intelligence; Bone mineral density; Deep learning; Quantitative computed tomography; Phantom segmentation; U-net; BONE; ASSOCIATION; STRENGTH; DENSITY;
DOI
10.1007/s11548-021-02345-w
Chinese Library Classification
R318 [Biomedical Engineering];
Discipline Code
0831;
Abstract
Purpose: In quantitative computed tomography (CT), manual selection of the intensity calibration phantom's region of interest is necessary for calculating density (mg/cm³) from radiodensity values (Hounsfield units: HU). As this manual process requires effort and time, the purposes of this study were to develop a system that applies a convolutional neural network (CNN) to automatically segment intensity calibration phantom regions in CT images, and to test the system on a large cohort to evaluate its robustness.

Methods: This cross-sectional, retrospective study included 1040 cases (520 from each of two institutions) in which an intensity calibration phantom (B-MAS200, Kyoto Kagaku, Kyoto, Japan) was used. A training dataset was created by manually segmenting the phantom regions in 40 cases (20 per institution). The CNN model's segmentation accuracy was assessed with the Dice coefficient and the average symmetric surface distance through fourfold cross-validation. Further, the absolute difference in HU between manually and automatically segmented regions was compared. The system was then tested on the remaining 1000 cases, and for each institution linear regression was applied to calculate the correlation coefficients between HU and phantom density.

Results: The source code and the model used for phantom segmentation can be accessed at https://github.com/keisuke-uemura/CT-Intensity-Calibration-Phantom-Segmentation. The median Dice coefficient was 0.977, and the median average symmetric surface distance was 0.116 mm. The median absolute difference in HU between manually and automatically segmented regions was 0.114 HU. For the test cases, the median correlation coefficients were 0.9998 and 0.999 for the two institutions, with a minimum value of 0.9863.

Conclusion: The proposed CNN model successfully segmented the calibration phantom regions in CT images with excellent accuracy.
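The two quantitative steps in the abstract — scoring segmentation overlap with the Dice coefficient, and fitting a linear HU-to-density calibration line across the phantom's known-density rods — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the toy masks and the HU/density values are invented for demonstration only.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    # Two empty masks are defined as perfectly agreeing
    return 2.0 * np.logical_and(a, b).sum() / total if total > 0 else 1.0

def calibrate_hu_to_density(mean_hu, known_density):
    """Least-squares line mapping the mean HU of each phantom rod to its
    known density (mg/cm^3); returns (slope, intercept, r)."""
    slope, intercept = np.polyfit(mean_hu, known_density, 1)
    r = np.corrcoef(mean_hu, known_density)[0, 1]
    return slope, intercept, r

# Toy example: an 8x8 "manual" mask vs. a slightly narrower "automated" mask
manual = np.zeros((8, 8), dtype=bool); manual[2:6, 2:6] = True
auto = np.zeros((8, 8), dtype=bool); auto[2:6, 3:6] = True
dice = dice_coefficient(manual, auto)  # 2*12 / (16 + 12) ≈ 0.857

# Toy calibration: mean HU per rod vs. the rod's nominal density
hu = np.array([10.0, 120.0, 260.0, 410.0])     # mean HU (illustrative)
density = np.array([0.0, 50.0, 100.0, 150.0])  # mg/cm^3 (illustrative)
slope, intercept, r = calibrate_hu_to_density(hu, density)
```

With an accurate automated segmentation, the per-rod mean HU barely changes, so the fitted calibration line — and hence the computed densities — match those from manual segmentation, which is what the near-unity correlation coefficients in the Results reflect.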
Pages: 1855-1864 (10 pages)
Related Papers
50 total
  • [31] Fully automated 3D segmentation and separation of multiple cervical vertebrae in CT images using a 2D convolutional neural network
    Bae, Hyun-Jin
    Hyun, Heejung
    Byeon, Younghwa
    Shin, Keewon
    Cho, Yongwon
    Song, Young Ji
    Yi, Seong
    Kuh, Sung-Uk
    Yeom, Jin S.
    Kim, Namkug
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2020, 184
  • [32] Deep Learning in CT Images: Automated Pulmonary Nodule Detection for Subsequent Management Using Convolutional Neural Network
    Xu, Yi-Ming
    Zhang, Teng
    Xu, Hai
    Qi, Liang
    Zhang, Wei
    Zhang, Yu-Dong
    Gao, Da-Shan
    Yuan, Mei
    Yu, Tong-Fu
    CANCER MANAGEMENT AND RESEARCH, 2020, 12 : 2979 - 2992
  • [33] Automated segmentation of dermal fillers in OCT images of mice using convolutional neural networks
    Pfister, Martin
    Schuetzenberger, Kornelia
    Pfeiffenberger, Ulrike
    Messner, Alina
    Chen, Zhe
    dos Santos, Valentin Aranha
    Puchner, Stefan
    Garhoefer, Gerhard
    Schmetterer, Leopold
    Groeschl, Martin
    Werkmeister, Rene M.
    BIOMEDICAL OPTICS EXPRESS, 2019, 10 (03) : 1315 - 1328
  • [34] Segmentation of intervertebral disks from videofluorographic images using convolutional neural network
    Fujinaka, Ayano
    Saito, Yuki
    Mekata, Kojiro
    Takizawa, Hotaka
    Kudo, Hiroyuki
    INTERNATIONAL FORUM ON MEDICAL IMAGING IN ASIA 2019, 2019, 11050
  • [35] Convolutional Neural Network for Automated FLAIR Lesion Segmentation on Clinical Brain MR Imaging
    Duong, M. T.
    Rudie, J. D.
    Wang, J.
    Xie, L.
    Mohan, S.
    Gee, J. C.
    Rauschecker, A. M.
    AMERICAN JOURNAL OF NEURORADIOLOGY, 2019, 40 (08) : 1282 - 1290
  • [36] Segmentation of uterus and placenta in MR images using a fully convolutional neural network
    Shahedi, Maysam
    Dormer, James D.
    Devi, Anusha T. T.
    Do, Quyen N.
    Xi, Yin
    Lewis, Matthew A.
    Madhuranthakam, Ananth J.
    Twickler, Diane M.
    Fei, Baowei
    MEDICAL IMAGING 2020: COMPUTER-AIDED DIAGNOSIS, 2020, 11314
  • [37] Skin Lesion Segmentation from Dermoscopic Images Using Convolutional Neural Network
    Zafar, Kashan
    Gilani, Syed Omer
    Waris, Asim
    Ahmed, Ali
    Jamil, Mohsin
    Khan, Muhammad Nasir
    Kashif, Amer Sohail
    SENSORS, 2020, 20 (06)
  • [38] Lumen Segmentation in Optical Coherence Tomography Images using Convolutional Neural Network
    Miyagawa, M.
    Costa, M. G. F.
    Gutierrez, M. A.
    Costa, J. P. G. F.
    Costa Filho, C. F. F.
    2018 40TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC), 2018, : 600 - 603
  • [39] Automated classification of gastric neoplasms in endoscopic images using a convolutional neural network
    Cho, Bum-Joo
    Bang, Chang Seok
    Park, Se Woo
    Yang, Young Joo
    Seo, Seung In
    Lim, Hyun
    Shin, Woon Geon
    Hong, Ji Taek
    Yoo, Yong Tak
    Hong, Seok Hwan
    Choi, Jae Ho
    Lee, Jae Jun
    Baik, Gwang Ho
    ENDOSCOPY, 2019, 51 (12) : 1121 - 1129
  • [40] Automated Classification of Oral Cancer Histopathology images using Convolutional Neural Network
    Panigrahi, Santisudha
    Swarnkar, Tripti
    2019 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM), 2019, : 1232 - 1234