Knowledge Adaptation for Efficient Semantic Segmentation

Cited by: 146
Authors
He, Tong [1 ]
Shen, Chunhua [1 ]
Tian, Zhi [1 ]
Gong, Dong [1 ]
Sun, Changming [2 ]
Yan, Youliang [3 ]
Affiliations
[1] Univ Adelaide, Adelaide, SA, Australia
[2] CSIRO, Data61, Canberra, ACT, Australia
[3] Huawei Technol, Noahs Ark Lab, Hong Kong, Peoples R China
Source
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019) | 2019
DOI
10.1109/CVPR.2019.00067
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Both accuracy and efficiency are of significant importance to the task of semantic segmentation. Existing deep FCNs suffer from heavy computations due to a series of high-resolution feature maps for preserving the detailed knowledge in dense estimation. Although reducing the feature map resolution (i.e., applying a large overall stride) via subsampling operations (e.g., pooling and convolution striding) can instantly increase the efficiency, it dramatically decreases the estimation accuracy. To tackle this dilemma, we propose a knowledge distillation method tailored for semantic segmentation to improve the performance of compact FCNs with a large overall stride. To handle the inconsistency between the features of the student and teacher networks, we optimize the feature similarity in a transferred latent domain formulated by utilizing a pre-trained autoencoder. Moreover, an affinity distillation module is proposed to capture long-range dependencies by calculating the non-local interactions across the whole image. To validate the effectiveness of our proposed method, extensive experiments have been conducted on three popular benchmarks: Pascal VOC, Cityscapes, and Pascal Context. Built upon a highly competitive baseline, our proposed method can improve the performance of a student network by 2.5% (mIoU boosts from 70.2 to 72.7 on the Cityscapes test set) and can train a better compact model with only 8% of the floating-point operations (FLOPs) of a model that achieves comparable performance.
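The affinity distillation idea described in the abstract can be sketched as follows: each spatial position of a feature map is compared against every other position to form a non-local affinity matrix, and the student is trained to match the teacher's affinity matrix. The function names, the cosine normalization, and the mean-squared penalty below are illustrative assumptions for a minimal sketch, not the paper's actual implementation.

```python
import numpy as np

def affinity_matrix(feat):
    """Non-local affinity map of a feature tensor.

    feat: array of shape (C, H, W), one C-dim feature per spatial position.
    Returns an (H*W, H*W) matrix of cosine similarities between all
    pairs of spatial positions.
    """
    C, H, W = feat.shape
    f = feat.reshape(C, H * W).T                  # (HW, C): per-position features
    norms = np.linalg.norm(f, axis=1, keepdims=True)
    f = f / np.maximum(norms, 1e-8)               # L2-normalize each position
    return f @ f.T                                # pairwise cosine similarities

def affinity_distillation_loss(student_feat, teacher_feat):
    """Squared difference between student and teacher affinity maps,
    averaged over all position pairs (assumes matching spatial sizes)."""
    A_s = affinity_matrix(student_feat)
    A_t = affinity_matrix(teacher_feat)
    n = A_s.shape[0]
    return np.sum((A_s - A_t) ** 2) / (n * n)
```

Because the affinity matrix compares every position with every other one, this loss propagates long-range structure from the teacher even when the student's small feature resolution loses local detail; identical features give a loss of exactly zero.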
Pages: 578 - 587
Page count: 10
Related Papers
50 records in total
  • [21] PDA: Progressive Domain Adaptation for Semantic Segmentation
    Liao, Muxin
    Tian, Shishun
    Zhang, Yuhang
    Hua, Guoguang
    Zou, Wenbin
    Li, Xia
    KNOWLEDGE-BASED SYSTEMS, 2024, 284
  • [22] Network adaptation for color image semantic segmentation
    An, Taeg-Hyun
    Kang, Jungyu
    Min, Kyoung-Wook
    IET IMAGE PROCESSING, 2023, 17 (10) : 2972 - 2983
  • [23] Uncertainty Reduction for Model Adaptation in Semantic Segmentation
    Teja, Prabhu S.
    Fleuret, Francois
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 9608 - 9618
  • [24] Multichannel Semantic Segmentation with Unsupervised Domain Adaptation
    Watanabe, Kohei
    Saito, Kuniaki
    Ushiku, Yoshitaka
    Harada, Tatsuya
    COMPUTER VISION - ECCV 2018 WORKSHOPS, PT V, 2019, 11133 : 600 - 616
  • [25] Continual BatchNorm Adaptation (CBNA) for Semantic Segmentation
    Klingner, Marvin
    Ayache, Mouadh
    Fingscheidt, Tim
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (11) : 20899 - 20911
  • [26] FDA: Fourier Domain Adaptation for Semantic Segmentation
    Yang, Yanchao
    Soatto, Stefano
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 4084 - 4094
  • [27] Geometric Unsupervised Domain Adaptation for Semantic Segmentation
    Guizilini, Vitor
    Li, Jie
    Ambrus, Rares
    Gaidon, Adrien
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 8517 - 8527
  • [28] Semantic adaptation of knowledge representation systems
  • [29] Unsupervised Domain Adaptation in Semantic Segmentation: A Review
    Toldo, Marco
    Maracani, Andrea
    Michieli, Umberto
    Zanuttigh, Pietro
    TECHNOLOGIES, 2020, 8 (02)
  • [30] On the Road to Online Adaptation for Semantic Image Segmentation
    Volpi, Riccardo
    De Jorge, Pau
    Larlus, Diane
    Csurka, Gabriela
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 19162 - 19173