Transformer-Based Deep Learning Network for Tooth Segmentation on Panoramic Radiographs

Cited: 0
Authors
SHENG Chen [1 ,2 ]
WANG Lin [3 ,4 ,1 ]
HUANG Zhenhuan [3 ,4 ,1 ]
WANG Tian [3 ,4 ,1 ]
GUO Yalin [3 ,4 ,1 ]
HOU Wenjie [3 ,4 ,1 ]
XU Laiqing [3 ,4 ,1 ]
WANG Jiazhu [3 ,4 ,1 ]
YAN Xue [3 ,4 ,1 ]
Affiliations
[1] Medical School of Chinese PLA
[2] Department of Stomatology, the First Medical Centre, Chinese PLA General Hospital
[3] Department of Stomatology, the First Medical Centre, Chinese PLA General Hospital
[4] Beihang University
Keywords
DOI
Not available
CLC Classification
R816.98 [Stomatology]; TP18 [Artificial Intelligence Theory]; TP391.41 [];
Subject Classification Codes
080203 ; 081104 ; 0812 ; 0835 ; 1001 ; 100105 ; 100207 ; 100602 ; 1405 ;
Abstract
Panoramic radiographs can help dentists quickly evaluate a patient's overall oral health status. Accurate detection and localization of tooth tissue on panoramic radiographs is the first step toward identifying pathology, and it also plays a key role in an automatic diagnosis system. However, the evaluation of panoramic radiographs depends on the clinical experience and knowledge of the dentist, and their interpretation can lead to misdiagnosis. It is therefore of great significance to use artificial intelligence to segment teeth on panoramic radiographs. In this study, Swin-Unet, a transformer-based U-shaped encoder-decoder architecture with skip connections, is introduced to perform panoramic radiograph segmentation. To evaluate the tooth segmentation performance of Swin-Unet, the PLAGH-BH dataset is introduced for research purposes. Performance is evaluated by F1 score, mean Intersection over Union (IoU), and accuracy (Acc). Compared with the U-Net, LinkNet, and FPN baselines, Swin-Unet performs much better on the PLAGH-BH tooth segmentation dataset. These results indicate that Swin-Unet is more feasible for panoramic radiograph segmentation and is valuable for potential clinical application.
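The evaluation metrics named in the abstract (F1 score, IoU, and pixel accuracy) can be illustrated with a minimal sketch for binary segmentation masks. This is an illustrative example only, not the authors' evaluation code; the masks and function name are hypothetical.

```python
def segmentation_metrics(pred, gt):
    """Compute F1 (Dice), IoU, and pixel accuracy for flat binary masks.

    pred, gt: equal-length sequences of 0/1 pixel labels
    (1 = tooth, 0 = background).
    """
    # Confusion-matrix counts over all pixels
    tp = sum(1 for p, g in zip(pred, gt) if p and g)
    fp = sum(1 for p, g in zip(pred, gt) if p and not g)
    fn = sum(1 for p, g in zip(pred, gt) if not p and g)
    tn = sum(1 for p, g in zip(pred, gt) if not p and not g)

    # F1 (equivalently the Dice coefficient) and IoU; if both masks are
    # empty there is nothing to segment, so score 1.0 by convention.
    denom = tp + fp + fn
    f1 = 2 * tp / (2 * tp + fp + fn) if denom else 1.0
    iou = tp / denom if denom else 1.0
    acc = (tp + tn) / len(gt)
    return f1, iou, acc


# Toy 4x4 masks flattened to length-16 lists: the prediction misses
# one ground-truth tooth pixel.
gt   = [1, 1, 0, 0,  1, 1, 0, 0,  0, 0, 0, 0,  0, 0, 0, 0]
pred = [1, 1, 0, 0,  1, 0, 0, 0,  0, 0, 0, 0,  0, 0, 0, 0]
f1, iou, acc = segmentation_metrics(pred, gt)
# f1 ≈ 0.857, iou = 0.75, acc = 0.9375
```

Note that F1 and IoU are monotonically related for a single mask pair (IoU = F1 / (2 - F1)), which is why papers often report both alongside plain pixel accuracy.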
Pages: 257 - 272
Page count: 16
Related Papers
50 items total
  • [31] Transformer-based ripeness segmentation for tomatoes
    Shinoda, Risa
    Kataoka, Hirokatsu
    Hara, Kensho
    Noguchi, Ryozo
    SMART AGRICULTURAL TECHNOLOGY, 2023, 4
  • [32] TAGNet: A transformer-based axial guided network for bile duct segmentation
    Zhou, Guang-Quan
    Zhao, Fuxing
    Yang, Qing-Han
    Wang, Kai-Ni
    Li, Shengxiao
    Zhou, Shoujun
    Lu, Jian
    Chen, Yang
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 86
  • [33] A transformer-based deep neural network model for SSVEP classification
    Chen, Jianbo
    Zhang, Yangsong
    Pan, Yudong
    Xu, Peng
    Guan, Cuntai
    NEURAL NETWORKS, 2023, 164 : 521 - 534
  • [34] Transformer-Based Cascade U-shaped Network for Action Segmentation
    Bao, Wenxia
    Lin, An
    Huang, Hua
    Yang, Xianjun
    Chen, Hemu
    2024 3RD INTERNATIONAL CONFERENCE ON IMAGE PROCESSING AND MEDIA COMPUTING, ICIPMC 2024, 2024, : 157 - 161
  • [35] TransRender: a transformer-based boundary rendering segmentation network for stroke lesions
    Wu, Zelin
    Zhang, Xueying
    Li, Fenglian
    Wang, Suzhe
    Li, Jiaying
    FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [36] A transformer-based deep learning framework to predict employee attrition
    Li, Wenhui
    PEERJ COMPUTER SCIENCE, 2023, 9
  • [37] Transformer-based deep learning model for forced oscillation localization
    Matar, Mustafa
    Estevez, Pablo Gill
    Marchi, Pablo
    Messina, Francisco
    Elmoudi, Ramadan
    Wshah, Safwan
    INTERNATIONAL JOURNAL OF ELECTRICAL POWER & ENERGY SYSTEMS, 2023, 146
  • [38] Characterization of groundwater contamination: A transformer-based deep learning model
    Bai, Tao
    Tahmasebi, Pejman
    ADVANCES IN WATER RESOURCES, 2022, 164
  • [39] GIT: A Transformer-Based Deep Learning Model for Geoacoustic Inversion
    Feng, Sheng
    Zhu, Xiaoqian
    Ma, Shuqing
    Lan, Qiang
    JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2023, 11 (06)
  • [40] Transformer-Based Deep Learning Method for the Prediction of Ventilator Pressure
    Fan, Ruizhe
    2022 IEEE 2ND INTERNATIONAL CONFERENCE ON INFORMATION COMMUNICATION AND SOFTWARE ENGINEERING (ICICSE 2022), 2022, : 25 - 28