Improving Semantic Segmentation via Efficient Self-Training

Cited by: 38
Authors
Zhu, Yi [1 ]
Zhang, Zhongyue [2 ]
Wu, Chongruo [3 ]
Zhang, Zhi [1 ]
He, Tong [1 ]
Zhang, Hang [4 ]
Manmatha, R. [1 ]
Li, Mu [1 ]
Smola, Alexander [1 ]
Affiliations
[1] Amazon Web Serv, Santa Clara, CA 95054 USA
[2] Snapchat, Sunnyvale, CA 94085 USA
[3] Univ Calif Davis, Davis, CA 95616 USA
[4] Facebook, Menlo Pk, CA 94025 USA
Funding
Australian Research Council;
Keywords
Training; Semantics; Computational modeling; Image segmentation; Data models; Schedules; Predictive models; Semantic segmentation; semi-supervised learning; self-training; fast training schedule; cross-domain generalization;
DOI
10.1109/TPAMI.2021.3138337
CLC classification number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Starting from the seminal work of Fully Convolutional Networks (FCN), there has been significant progress on semantic segmentation. However, deep learning models often require large amounts of pixelwise annotations to train accurate and robust models. Given the prohibitively expensive annotation cost of segmentation masks, we introduce a self-training framework in this paper to leverage pseudo labels generated from unlabeled data. In order to handle the data imbalance problem of semantic segmentation, we propose a centroid sampling strategy to uniformly select training samples from every class within each epoch. We also introduce a fast training schedule to alleviate the computational burden. This enables us to explore the usage of large amounts of pseudo labels. Our Centroid Sampling based Self-Training framework (CSST) achieves state-of-the-art results on the Cityscapes and CamVid datasets. On the PASCAL VOC 2012 test set, our models trained on the original train set even outperform the same models trained on the much larger augmented train set, indicating the effectiveness of CSST when fewer annotations are available. We also demonstrate promising few-shot generalization capability from Cityscapes to the BDD100K and Mapillary datasets.
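The class-balancing idea in the abstract can be illustrated with a minimal sketch. This is not the paper's actual centroid sampling implementation; it is a simplified stand-in showing the core principle of drawing the same number of training samples from every class each epoch instead of sampling at random, which would over-represent frequent classes such as road or sky. The function name and arguments are hypothetical.

```python
import random
from collections import defaultdict

def class_uniform_sample(samples, labels, per_class):
    """Draw `per_class` training samples from each class for one epoch.

    Simplified stand-in for centroid-style class-uniform sampling:
    group candidate samples by class, then draw equally from every
    group, oversampling with replacement when a rare class has
    fewer than `per_class` examples.
    """
    by_class = defaultdict(list)
    for sample, label in zip(samples, labels):
        by_class[label].append(sample)

    epoch = []
    for cls, pool in by_class.items():
        k = min(per_class, len(pool))
        epoch.extend(random.sample(pool, k))          # without replacement
        epoch.extend(random.choices(pool, k=per_class - k))  # top up rare classes
    random.shuffle(epoch)
    return epoch
```

In a real segmentation pipeline the "samples" would be image crops and the class assignment would come from the (pseudo-)label mask, but the balancing logic per epoch is the same.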
Pages: 1589 - 1602
Page count: 14