Multi-view knowledge distillation for efficient semantic segmentation

Cited by: 4
Authors
Wang, Chen [1 ]
Zhong, Jiang [1 ]
Dai, Qizhu [1 ]
Qi, Yafei [2 ]
Shi, Fengyuan [3 ]
Fang, Bin [1 ]
Li, Xue [4 ]
Affiliations
[1] Chongqing Univ, Sch Comp Sci, Chongqing 400044, Peoples R China
[2] Cent South Univ, Sch Comp Sci & Engn, Changsha 410083, Peoples R China
[3] Northeastern Univ, Sch Informat Sci & Engn, Shenyang 110819, Peoples R China
[4] Univ Queensland, Sch Informat Technol & Elect Engn, Brisbane, Qld 4072, Australia
Funding
National Natural Science Foundation of China;
Keywords
Multi-view learning; Knowledge distillation; Knowledge aggregation; Semantic segmentation; ENSEMBLE;
DOI
10.1007/s11554-023-01296-6
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Current state-of-the-art semantic segmentation models achieve remarkable segmentation accuracy. However, their large model size and computational cost restrict their deployment on low-latency online systems and resource-constrained devices. Knowledge distillation has become a popular solution for compressing large-scale segmentation models: a small student model is trained under the guidance of a large teacher model. However, a single teacher model's knowledge may be insufficiently diverse to train an accurate student model, and the student may also inherit the teacher's bias. This paper proposes a multi-view knowledge distillation framework, MVKD, for efficient semantic segmentation. MVKD aggregates multi-view knowledge from multiple teacher models and transfers it to the student model. In MVKD, we introduce a multi-view co-tuning strategy to enforce uniformity among the feature representations of the different teachers. In addition, we propose a multi-view feature distillation loss and a multi-view output distillation loss to transfer the knowledge contained in the features and outputs of multiple teachers to the student. We evaluate the proposed MVKD on three benchmark datasets: Cityscapes, CamVid, and Pascal VOC 2012. Experimental results demonstrate the effectiveness of MVKD in compressing semantic segmentation models.
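The abstract only names the two distillation terms without giving their formulations. For orientation, below is a minimal PyTorch-style sketch of how multi-teacher feature and output distillation losses are commonly implemented for segmentation. The function names, the simple averaging over teachers, the 1x1 projection layers, and the temperature/weighting hyperparameters are illustrative assumptions, not the paper's exact MVKD losses or its co-tuning strategy.

```python
# Sketch of multi-teacher ("multi-view") distillation losses for semantic
# segmentation. All names and design choices here are assumptions for
# illustration only; they do not reproduce the MVKD paper's formulation.
import torch
import torch.nn.functional as F


def multi_view_output_distillation(student_logits, teacher_logits_list, T=4.0):
    """Pixel-wise KL divergence between the student's and each teacher's
    temperature-softened class distributions, averaged over teachers."""
    losses = []
    for t_logits in teacher_logits_list:
        # logits have shape (N, C, H, W); softmax over the class dimension
        s = F.log_softmax(student_logits / T, dim=1)
        t = F.softmax(t_logits / T, dim=1)
        losses.append(F.kl_div(s, t, reduction="batchmean") * T * T)
    return torch.stack(losses).mean()


def multi_view_feature_distillation(student_feat, teacher_feats, projections):
    """L2 distance between projected student features and each teacher's
    features, averaged over teachers. `projections` is a list of 1x1 convs
    mapping the student's channels to each teacher's (an assumption)."""
    losses = []
    for t_feat, proj in zip(teacher_feats, projections):
        s_feat = proj(student_feat)
        if s_feat.shape[-2:] != t_feat.shape[-2:]:
            s_feat = F.interpolate(s_feat, size=t_feat.shape[-2:],
                                   mode="bilinear", align_corners=False)
        losses.append(F.mse_loss(s_feat, t_feat))
    return torch.stack(losses).mean()


# Hypothetical usage: combine the task loss with the two distillation terms,
# keeping the teacher models frozen during student training.
# loss = (F.cross_entropy(student_logits, labels, ignore_index=255)
#         + alpha * multi_view_output_distillation(student_logits, teacher_logits_list)
#         + beta * multi_view_feature_distillation(student_feat, teacher_feats, projections))
```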
Pages: 11
Related papers
50 records in total
  • [31] d'Angelo, Pablo; Cerra, Daniele; Azimi, Seyed Majid; Merkle, Nina; Tian, Jiaojiao; Auer, Stefan; Pato, Miguel; de los Reyes, Raquel; Zhuo, Xiangyu; Bittner, Ksenia; Krauss, Thomas; Reinartz, Peter. 3D Semantic Segmentation from Multi-view Optical Satellite Images. 2019 IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2019), 2019: 5053-5056.
  • [32] Peters, Torben; Brenner, Claus; Schindler, Konrad. Semantic segmentation of mobile mapping point clouds via multi-view label transfer. ISPRS Journal of Photogrammetry and Remote Sensing, 2023, 202: 30-39.
  • [33] Lu, Yonghua; Zhen, Mingmin; Fang, Tian. Multi-view based neural network for semantic segmentation on 3D scenes. Science China-Information Sciences, 2019, 62 (12).
  • [34] Lu, Yonghua; Zhen, Mingmin; Fang, Tian. Multi-view based neural network for semantic segmentation on 3D scenes. Science China Information Sciences, 2019, 62.
  • [35] Ling Z.; Li X.; Zhang T.; Chen L.; Sun L. Semantic segmentation method for continuous images based on multi-level knowledge distillation. Jisuanji Jicheng Zhizao Xitong/Computer Integrated Manufacturing Systems, CIMS, 2023, 29 (04): 1244-1253.
  • [36] Li, Zhi; Wu, Xing; Wang, Jianjia; Guo, Yike. Weather-degraded image semantic segmentation with multi-task knowledge distillation. Image and Vision Computing, 2022, 127.
  • [37] Huang, Xinrui; He, Dongming; Li, Zhenming; Zhang, Xiaofan; Wang, Xudong. IOSSAM: Label Efficient Multi-view Prompt-Driven Tooth Segmentation. Medical Image Computing and Computer Assisted Intervention - MICCAI 2024, Pt I, 2024, 15001: 632-642.
  • [38] Ji, Deyi; Wang, Haoran; Tao, Mingyuan; Huang, Jianqiang; Hua, Xian-Sheng; Lu, Hongtao. Structural and Statistical Texture Knowledge Distillation for Semantic Segmentation. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022), 2022: 16855-16864.
  • [39] Jang, Seunghyun; Seo, Kyungdeok; Kang, Hyunyoung; Kim, Seonghan; Kang, Seungyoung; Kim, Ki Hean; Yang, Sejung. Semantic knowledge distillation for conjunctival goblet cell segmentation. Imaging, Manipulation, and Analysis of Biomolecules, Cells, and Tissues XXI, 2023, 12383.
  • [40] Niemann, Onno; Vox, Christopher; Werner, Thorben. Towards Comparable Knowledge Distillation in Semantic Image Segmentation. Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2023, Pt IV, 2025, 2136: 185-200.