Improving brain tumor segmentation with anatomical prior-informed pre-training

Cited: 0
Authors
Wang, Kang [1 ,2 ]
Li, Zeyang [3 ]
Wang, Haoran [1 ,2 ]
Liu, Siyu [1 ,2 ]
Pan, Mingyuan [4 ]
Wang, Manning [1 ,2 ]
Wang, Shuo [1 ,2 ]
Song, Zhijian [1 ,2 ]
Affiliations
[1] Fudan Univ, Digital Med Res Ctr, Sch Basic Med Sci, Shanghai, Peoples R China
[2] Fudan Univ, Shanghai Key Lab Med Image Comp & Comp Assisted Intervent, Shanghai, Peoples R China
[3] Fudan Univ, Zhongshan Hosp, Dept Neurosurg, Shanghai, Peoples R China
[4] Fudan Univ, Huashan Hosp, Radiat Oncol Ctr, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
masked autoencoder; anatomical priors; transformer; brain tumor segmentation; magnetic resonance image; self-supervised learning; OPTIMIZATION; REGISTRATION; ROBUST;
DOI
10.3389/fmed.2023.1211800
CLC number
R5 [Internal Medicine];
Discipline code
1002; 100201;
Abstract
Introduction: Precise delineation of glioblastoma in multi-parameter magnetic resonance images is pivotal for neurosurgery and subsequent treatment monitoring. Transformer models have shown promise in brain tumor segmentation, but their efficacy depends heavily on large amounts of annotated data. To address the scarcity of annotations and improve model robustness, self-supervised learning methods based on masked autoencoders have been devised; however, these methods do not incorporate the anatomical priors of brain structures.
Methods: This study proposes an anatomical prior-informed masking strategy that enhances masked autoencoder pre-training by combining data-driven reconstruction with anatomical knowledge. We estimate the likelihood of tumor presence in each brain structure and use this information to guide the masking procedure.
Results: Compared with random masking, our method focuses pre-training on regions that are more pertinent to the downstream segmentation task. Experiments on the BraTS21 dataset demonstrate that the proposed method surpasses state-of-the-art self-supervised learning techniques, improving brain tumor segmentation in both accuracy and data efficiency.
Discussion: Mechanisms tailored to extract valuable information from large-scale data can improve computational efficiency and performance, yielding more precise results. Integrating anatomical priors with vision approaches remains a promising direction.
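To make the masking strategy concrete, the minimal Python sketch below shows one way an anatomical prior could guide patch masking in a masked autoencoder. It is illustrative only, not the authors' released implementation: per-structure tumor frequencies are estimated from annotated scans aligned to a brain atlas, converted into per-patch masking weights, and sampled in place of a uniform random mask. All function and variable names (tumor_prior_per_structure, patch_mask_probs, sample_anatomy_biased_mask, etc.) are assumptions.

import numpy as np

def tumor_prior_per_structure(tumor_masks, atlas, num_structures):
    # Estimate P(tumor | structure) from annotated training scans.
    # tumor_masks: iterable of boolean volumes aligned to the atlas;
    # atlas: integer volume labeling each voxel with a structure index.
    tumor_vox = np.zeros(num_structures)
    total_vox = np.zeros(num_structures)
    for tumor in tumor_masks:
        for s in range(num_structures):
            region = atlas == s
            total_vox[s] += region.sum()
            tumor_vox[s] += (tumor & region).sum()
    return tumor_vox / np.maximum(total_vox, 1)

def patch_mask_probs(atlas, prior, patch_size):
    # Convert the voxel-level prior into one masking weight per patch:
    # the mean tumor likelihood over the structures the patch covers.
    D, H, W = atlas.shape
    pd, ph, pw = patch_size
    weights = []
    for z in range(0, D, pd):
        for y in range(0, H, ph):
            for x in range(0, W, pw):
                labels = atlas[z:z + pd, y:y + ph, x:x + pw]
                weights.append(prior[labels].mean())
    weights = np.asarray(weights) + 1e-6  # keep every patch maskable
    return weights / weights.sum()

def sample_anatomy_biased_mask(probs, mask_ratio=0.75, rng=None):
    # Draw the set of masked patches without replacement, biased by the
    # anatomical prior; uniform probs recovers plain random masking.
    rng = rng or np.random.default_rng()
    n_mask = int(mask_ratio * len(probs))
    chosen = rng.choice(len(probs), size=n_mask, replace=False, p=probs)
    mask = np.zeros(len(probs), dtype=bool)
    mask[chosen] = True
    return mask  # True = patch hidden from the encoder and reconstructed

# Toy usage with a hypothetical 2-structure atlas:
atlas = np.zeros((32, 32, 32), dtype=int); atlas[16:, :, :] = 1
tumor = np.zeros_like(atlas, dtype=bool); tumor[20:28, 8:24, 8:24] = True
prior = tumor_prior_per_structure([tumor], atlas, num_structures=2)
probs = patch_mask_probs(atlas, prior, patch_size=(8, 8, 8))
mask = sample_anatomy_biased_mask(probs, mask_ratio=0.75)

The small smoothing constant keeps every patch eligible for masking, so pre-training is biased toward tumor-prone anatomy without ignoring the rest of the brain entirely.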
Pages: 14