Training a thin and shallow lane detection network with self-knowledge distillation

Cited by: 2
Authors
Dai, Xuerui [1 ]
Yuan, Xue [1 ]
Wei, Xueye [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Elect & Informat Engn, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
lane detection; deep learning; self-knowledge distillation; ROAD; VISION;
DOI
10.1117/1.JEI.30.1.013004
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Codes
0808 ; 0809 ;
Abstract
With the development of modern science and technology, vehicles are increasingly equipped with intelligent driver-assistance systems, of which lane detection is a key function. Complex detection structures (either wide or deep) have been investigated to boost accuracy and to overcome the challenges of complicated scenarios. However, their computation and memory costs rise sharply, and so does the response time. For resource-constrained devices, lane detection networks with low cost and short inference time are needed. To obtain more accurate lane detection results, a large (deep and wide) detection structure is framed for high-dimensional and highly robust features, and deep supervision loss is applied at different resolutions and stages. Despite its high-precision advantage, the large detection network cannot be deployed directly on embedded devices because of its memory and computation demands. To make the network thinner and lighter, a general training strategy called self-knowledge distillation (SKD) is proposed. It differs from classical knowledge distillation: there are no independent teacher and student networks, and the knowledge is distilled within the network itself. For a more comprehensive and precise evaluation, a new lane data set is collected; the Caltech Lane data set and the TuSimple lane data set are also used. Experiments further show that, via SKD, a small student network achieves detection accuracy similar to that of the large teacher network, while having shorter inference time and lower memory usage. Thus it can be applied flexibly on resource-limited devices. © 2021 SPIE and IS&T [DOI: 10.1117/1.JEI.30.1.013004]
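The abstract describes distilling knowledge within a single network: shallower, deep-supervision stages are trained to match the softened predictions of the network's own deepest head, so no separate teacher model is required. The paper does not publish code, so the following is only a minimal NumPy sketch of a temperature-softened self-distillation loss of that general kind; the function names, the temperature value, and the toy logits are illustrative assumptions, not details from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-softened softmax over the last axis.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_distillation_loss(stage_logits, final_logits, T=4.0):
    """KL divergence pushing an intermediate stage's prediction toward the
    softened prediction of the network's own deepest head (the "teacher")."""
    p = softmax(final_logits, T)  # teacher: the network's own final head
    q = softmax(stage_logits, T)  # student: a shallower supervision stage
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    # Scale by T^2 (standard in distillation) and average over positions.
    return float(kl.mean() * T * T)

# Toy per-position lane/background logits: 3 positions, 2 classes.
final = np.array([[2.0, -1.0], [0.5, 0.5], [-1.0, 2.0]])
stage = np.array([[1.0,  0.0], [0.2, 0.8], [ 0.0, 1.0]])
loss = self_distillation_loss(stage, final)
```

Because the teacher signal comes from the same forward pass, this term can simply be added to the usual supervised segmentation loss at each deep-supervision stage during training.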
Pages: 19
Related Papers
50 records in total
  • [21] A Multi-Scale Convolutional Neural Network with Self-Knowledge Distillation for Bearing Fault Diagnosis
    Yu, Jiamao
    Hu, Hexuan
    MACHINES, 2024, 12 (11)
  • [22] SELF-KNOWLEDGE DISTILLATION VIA FEATURE ENHANCEMENT FOR SPEAKER VERIFICATION
    Liu, Bei
    Wang, Haoyu
    Chen, Zhengyang
    Wang, Shuai
    Qian, Yanmin
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 7542 - 7546
  • [23] MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition
    Yang, Chuanguang
    An, Zhulin
    Zhou, Helong
    Cai, Linhang
    Zhi, Xiang
    Wu, Jiwen
    Xu, Yongjun
    Zhang, Qian
    COMPUTER VISION, ECCV 2022, PT XXIV, 2022, 13684 : 534 - 551
  • [24] A Novel Small Target Detection Strategy: Location Feature Extraction in the Case of Self-Knowledge Distillation
    Liu, Gaohua
    Li, Junhuan
    Yan, Shuxia
    Liu, Rui
    APPLIED SCIENCES-BASEL, 2023, 13 (06):
  • [25] Personalized Edge Intelligence via Federated Self-Knowledge Distillation
    Jin, Hai
    Bai, Dongshan
    Yao, Dezhong
    Dai, Yutong
    Gu, Lin
    Yu, Chen
    Sun, Lichao
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2023, 34 (02) : 567 - 580
  • [26] Self-Knowledge Distillation for First Trimester Ultrasound Saliency Prediction
    Gridach, Mourad
    Savochkina, Elizaveta
    Drukker, Lior
    Papageorghiou, Aris T.
    Noble, J. Alison
    SIMPLIFYING MEDICAL ULTRASOUND, ASMUS 2022, 2022, 13565 : 117 - 127
  • [27] Automatic Diabetic Retinopathy Grading via Self-Knowledge Distillation
    Luo, Ling
    Xue, Dingyu
    Feng, Xinglong
    ELECTRONICS, 2020, 9 (09) : 1 - 13
  • [28] Decoupled Feature and Self-Knowledge Distillation for Speech Emotion Recognition
    Yu, Haixiang
    Ning, Yuan
    IEEE ACCESS, 2025, 13 : 33275 - 33285
  • [29] Teaching Yourself: A Self-Knowledge Distillation Approach to Action Recognition
    Duc-Quang Vu
    Le, Ngan
    Wang, Jia-Ching
    IEEE ACCESS, 2021, 9 : 105711 - 105723
  • [30] Active Learning for Lane Detection: A Knowledge Distillation Approach
    Peng, Fengchao
    Wang, Chao
    Liu, Jianzhuang
    Yang, Zhen
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 15132 - 15141