Comparison of Multi-Label U-Net and Mask R-CNN for panoramic radiograph segmentation to detect periodontitis

Cited by: 20
Authors
Widyaningrum, Rini [1 ]
Candradewi, Ika [2 ]
Aji, Nur Rahman Ahmad Seno [3 ]
Aulianisa, Rona [4 ]
Affiliations
[1] Univ Gadjah Mada, Fac Dent, Dept Dentomaxillofacial Radiol, Yogyakarta 55281, Indonesia
[2] Univ Gadjah Mada, Fac Math & Nat Sci, Dept Comp Sci & Elect, Yogyakarta, Indonesia
[3] Univ Gadjah Mada, Fac Dent, Dept Periodont, Yogyakarta, Indonesia
[4] Univ Gadjah Mada, Fac Dent, Yogyakarta, Indonesia
Keywords
Radiography, Panoramic; Deep Learning; Periodontitis; Tooth; Artificial Intelligence
DOI
10.5624/isd.20220105
Chinese Library Classification
R78 [Stomatology]
Discipline code
1003
Abstract
Purpose: Periodontitis, the most prevalent chronic inflammatory condition affecting the teeth-supporting tissues, is diagnosed and classified through clinical and radiographic examinations. The staging of periodontitis using panoramic radiographs provides information for designing computer-assisted diagnostic systems. Image segmentation of periodontitis is a prerequisite for image processing in diagnostic applications. This study evaluated image segmentation for periodontitis staging based on deep learning approaches.
Materials and Methods: Multi-Label U-Net and Mask R-CNN models were compared for image segmentation to detect periodontitis using 100 digital panoramic radiographs. Normal conditions and 4 stages of periodontitis were annotated on these panoramic radiographs. A total of 1100 original and augmented images were then randomly divided into a training dataset (75%) to produce the segmentation models and a testing dataset (25%) to determine their evaluation metrics.
Results: The performance of the segmentation models against the radiographic diagnosis of periodontitis conducted by a dentist was described by evaluation metrics (i.e., the Dice coefficient and intersection-over-union [IoU] score). Multi-Label U-Net achieved a Dice coefficient of 0.96 and an IoU score of 0.97. Meanwhile, Mask R-CNN attained a Dice coefficient of 0.87 and an IoU score of 0.74. U-Net showed the characteristics of semantic segmentation, and Mask R-CNN performed instance segmentation with accuracy, precision, recall, and F1-score values of 95%, 85.6%, 88.2%, and 86.6%, respectively.
Conclusion: Multi-Label U-Net produced superior image segmentation to that of Mask R-CNN. The authors recommend integrating it with other techniques to develop hybrid models for automatic periodontitis detection. (Imaging Sci Dent 2022; 52: 383-91)
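The Dice coefficient and IoU score reported in the abstract are standard overlap metrics for comparing a predicted segmentation mask against a ground-truth annotation. As an illustrative sketch only (a minimal NumPy implementation on toy binary masks, not the authors' evaluation code), they can be computed as:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    """IoU (Jaccard index) = |A ∩ B| / |A ∪ B| for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Toy 4x4 masks: a hypothetical predicted mask vs. a ground-truth annotation
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
gt   = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
print(round(float(dice_coefficient(pred, gt)), 3))  # 12/13 ≈ 0.923
print(round(float(iou_score(pred, gt)), 3))         # 6/7 ≈ 0.857
```

Both metrics range from 0 (no overlap) to 1 (perfect overlap); for multi-label segmentation, as in this study, they are typically computed per class and then averaged.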
Pages: 383-391 (9 pages)