Improving brain tumor segmentation with anatomical prior-informed pre-training

Cited: 0
Authors
Wang, Kang [1 ,2 ]
Li, Zeyang [3 ]
Wang, Haoran [1 ,2 ]
Liu, Siyu [1 ,2 ]
Pan, Mingyuan [4 ]
Wang, Manning [1 ,2 ]
Wang, Shuo [1 ,2 ]
Song, Zhijian [1 ,2 ]
Affiliations
[1] Fudan Univ, Digital Med Res Ctr, Sch Basic Med Sci, Shanghai, Peoples R China
[2] Fudan Univ, Shanghai Key Lab Med Image Comp & Comp Assisted Intervent, Shanghai, Peoples R China
[3] Fudan Univ, Zhongshan Hosp, Dept Neurosurg, Shanghai, Peoples R China
[4] Fudan Univ, Huashan Hosp, Radiat Oncol Ctr, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
masked autoencoder; anatomical priors; transformer; brain tumor segmentation; magnetic resonance image; self-supervised learning; OPTIMIZATION; REGISTRATION; ROBUST;
DOI
10.3389/fmed.2023.1211800
Chinese Library Classification (CLC)
R5 [Internal Medicine];
Discipline Classification Code
1002; 100201;
Abstract
Introduction: Precise delineation of glioblastoma in multi-parameter magnetic resonance images is pivotal for neurosurgery and subsequent treatment monitoring. Transformer models have shown promise in brain tumor segmentation, but their efficacy depends heavily on a substantial amount of annotated data. To address the scarcity of annotated data and improve model robustness, self-supervised learning methods based on masked autoencoders have been devised. Nevertheless, these methods have not incorporated the anatomical priors of brain structures.
Methods: This study proposes an anatomical prior-informed masking strategy to enhance the pre-training of masked autoencoders, combining data-driven reconstruction with anatomical knowledge. We estimate the likelihood of tumor presence in various brain structures and use this information to guide the masking procedure.
Results: Compared with random masking, our method enables pre-training to concentrate on regions that are more pertinent to the downstream segmentation task. Experiments on the BraTS21 dataset demonstrate that the proposed method surpasses state-of-the-art self-supervised learning techniques, improving brain tumor segmentation in both accuracy and data efficiency.
Discussion: Tailored mechanisms that extract valuable information from large-scale data can improve computational efficiency and performance, leading to greater precision. Integrating anatomical priors with vision approaches remains a promising direction.
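The abstract describes the masking strategy only at a high level: tumor-occurrence likelihood is estimated per brain structure and used to bias which image patches are masked during masked-autoencoder pre-training. The snippet below is a minimal illustrative sketch of such a prior-weighted patch sampler, not the authors' implementation; the structure label volume, the per-structure probability table `structure_tumor_prob`, and the function `prior_informed_mask` are hypothetical names, and non-overlapping cubic patches (as in a 3D ViT-style MAE) are assumed.

```python
# Minimal sketch of anatomical prior-informed masking for MAE pre-training.
# Assumptions: a brain-structure label volume aligned to the MRI and a
# hypothetical table of per-structure tumor-occurrence probabilities.
import numpy as np

def prior_informed_mask(structure_labels, structure_tumor_prob,
                        patch_size=16, mask_ratio=0.75, rng=None):
    """Select patches to mask, biased toward tumor-prone structures.

    structure_labels : (D, H, W) int array of anatomical structure IDs.
    structure_tumor_prob : dict mapping structure ID -> estimated
        probability that a tumor occurs in that structure.
    Returns a boolean (gd, gh, gw) array over patches (True = masked).
    """
    rng = np.random.default_rng() if rng is None else rng
    D, H, W = structure_labels.shape
    gd, gh, gw = D // patch_size, H // patch_size, W // patch_size

    # Score each patch by the mean tumor-occurrence probability of the
    # structures it covers; a small floor keeps every patch eligible.
    scores = np.full(gd * gh * gw, 1e-3)
    idx = 0
    for z in range(gd):
        for y in range(gh):
            for x in range(gw):
                block = structure_labels[z*patch_size:(z+1)*patch_size,
                                         y*patch_size:(y+1)*patch_size,
                                         x*patch_size:(x+1)*patch_size]
                probs = [structure_tumor_prob.get(int(s), 0.0)
                         for s in np.unique(block)]
                scores[idx] += float(np.mean(probs))
                idx += 1

    # Sample masked patches without replacement, with probability
    # proportional to the prior score (instead of uniformly at random).
    n_mask = int(mask_ratio * scores.size)
    p = scores / scores.sum()
    chosen = rng.choice(scores.size, size=n_mask, replace=False, p=p)
    mask = np.zeros(scores.size, dtype=bool)
    mask[chosen] = True
    return mask.reshape(gd, gh, gw)
```

In practice the structure labels might be obtained by registering a brain atlas to each scan, and the resulting mask would replace the uniform random mask fed to the masked autoencoder, so that reconstruction effort is concentrated on anatomy where tumors are more likely to appear.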
Pages: 14
Related Papers
50 records in total
  • [31] Unsupervised Pre-Training for 3D Leaf Instance Segmentation
    Roggiolani, Gianmarco
    Magistri, Federico
    Guadagnino, Tiziano
    Behley, Jens
    Stachniss, Cyrill
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (11) : 7448 - 7455
  • [32] Anatomical Structure-Guided Medical Vision-Language Pre-training
    Li, Qingqiu
    Yan, Xiaohan
    Xu, Jilan
    Yuan, Runtian
    Zhang, Yuejie
    Feng, Rui
    Shen, Quanli
    Zhang, Xiaobo
    Wang, Shujun
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2024, PT XI, 2024, 15011 : 80 - 90
  • [33] BrainNPT: Pre-Training Transformer Networks for Brain Network Classification
    Hu, Jinlong
    Huang, Yangmin
    Wang, Nan
    Dong, Shoubin
    IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, 2024, 32 : 2727 - 2736
  • [34] Improving generalization through self-supervised learning using generative pre-training transformer for natural gas segmentation
    Santos, Luiz Fernando Trindade
    Gattass, Marcelo
    Rodriguez, Carlos
    Hurtado, Jan
    Miranda, Frederico
    Michelon, Diogo
    Ribeiro, Roberto
    COMPUTERS & GEOSCIENCES, 2025, 196
  • [35] Improving Knowledge Graph Representation Learning by Structure Contextual Pre-training
    Ye, Ganqiang
    Zhang, Wen
    Bi, Zhen
    Wong, Chi Man
    Chen, Hui
    Chen, Huajun
    PROCEEDINGS OF THE 10TH INTERNATIONAL JOINT CONFERENCE ON KNOWLEDGE GRAPHS (IJCKG 2021), 2021, : 151 - 155
  • [36] Improving Information Extraction on Business Documents with Specific Pre-training Tasks
    Douzon, Thibault
    Duffner, Stefan
    Garcia, Christophe
    Espinas, Jeremy
    DOCUMENT ANALYSIS SYSTEMS, DAS 2022, 2022, 13237 : 111 - 125
  • [37] IMITATE: Clinical Prior Guided Hierarchical Vision-Language Pre-Training
    Liu, Che
    Cheng, Sibo
    Shi, Miaojing
    Shah, Anand
    Bai, Wenjia
    Arcucci, Rossella
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2025, 44 (01) : 519 - 529
  • [38] Deep neural network with generative adversarial networks pre-training for brain tumor classification based on MR images
    Ghassemi, Navid
    Shoeibi, Afshin
    Rouhani, Modjtaba
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2020, 57
  • [39] Cherry growth modeling based on Prior Distance Embedding contrastive learning: Pre-training, anomaly detection, semantic segmentation, and temporal modeling
    Xu, Wei
    Guo, Ruiya
    Chen, Pengyu
    Li, Li
    Gu, Maomao
    Sun, Hao
    Hu, Lingyan
    Wang, Zumin
    Li, Kefeng
    COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2024, 221
  • [40] A knowledge-guided pre-training framework for improving molecular representation learning
    Li, Han
    Zhang, Ruotian
    Min, Yaosen
    Ma, Dacheng
    Zhao, Dan
    Zeng, Jianyang
    NATURE COMMUNICATIONS, 2023, 14 (01)