Training a thin and shallow lane detection network with self-knowledge distillation

Cited by: 2
Authors
Dai, Xuerui [1 ]
Yuan, Xue [1 ]
Wei, Xueye [1 ]
Affiliation
[1] Beijing Jiaotong Univ, Sch Elect & Informat Engn, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
lane detection; deep learning; self-knowledge distillation; ROAD; VISION;
DOI
10.1117/1.JEI.30.1.013004
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
With the development of modern science and technology, vehicles are increasingly equipped with intelligent driver assistance systems, in which lane detection is a key function. Complex detection structures (either wide or deep) have been investigated to boost accuracy and overcome the challenges of complicated scenarios. However, computation and memory costs increase sharply, and so does response time. For resource-constrained devices, lane detection networks with low cost and short inference time are required. To obtain more accurate lane detection results, a large (deep and wide) detection structure is designed to extract high-dimensional, highly robust features, and a deep supervision loss is applied at different resolutions and stages. Despite its high precision, the large detection network cannot be deployed on embedded devices directly because of its memory and computation demands. To make the network thinner and lighter, a general training strategy called self-knowledge distillation (SKD) is proposed. Unlike classical knowledge distillation, there are no independent teacher and student networks; the knowledge is distilled within the network itself. For a more comprehensive and precise evaluation, a new lane data set is collected, and the Caltech Lanes and TuSimple lane data sets are also used. Experiments further show that, via SKD, a small student network achieves detection accuracy similar to the large teacher network while having a shorter inference time and lower memory usage, so it can be applied flexibly on resource-limited devices. © 2021 SPIE and IS&T [DOI: 10.1117/1.JEI.30.1.013004]
Pages: 19
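
The abstract sketches how SKD works here: the same network carries deep-supervision heads at several stages, and instead of a separate teacher network, the deeper part of the network supplies soft targets for the shallower parts. Below is a minimal PyTorch sketch of that general idea, assuming a multi-stage backbone with one logits head per stage; the function name `skd_loss`, the shallow-to-deep head ordering, the cross-entropy/KL loss combination, and the hyperparameter values are illustrative assumptions, not the authors' implementation.

```python
# Illustrative self-knowledge-distillation (SKD) loss of the kind the abstract
# describes: no separate teacher network; the deepest supervision head of the
# same model provides soft targets for the shallower heads.
# All names and hyperparameters are hypothetical, not taken from the paper.
import torch
import torch.nn.functional as F


def skd_loss(stage_logits, target, alpha=0.5, temperature=4.0):
    """stage_logits: list of per-stage segmentation logits, ordered shallow -> deep,
    each of shape (N, C, H, W); target: (N, H, W) integer lane labels."""
    deepest = stage_logits[-1]

    # Deep supervision: every stage is supervised by the ground-truth labels.
    sup = sum(F.cross_entropy(s, target) for s in stage_logits)

    # Self-distillation: shallower stages mimic the softened output of the
    # deepest stage, which acts as the in-network "teacher" (gradients detached).
    soft_teacher = F.softmax(deepest.detach() / temperature, dim=1)
    distill = sum(
        F.kl_div(F.log_softmax(s / temperature, dim=1), soft_teacher,
                 reduction="batchmean") * temperature ** 2
        for s in stage_logits[:-1]
    )
    return sup + alpha * distill


if __name__ == "__main__":
    # Toy check with three supervision stages and a binary lane mask.
    heads = [torch.randn(2, 2, 64, 128) for _ in range(3)]
    labels = torch.randint(0, 2, (2, 64, 128))
    print(skd_loss(heads, labels))
```

In this sketch the teacher logits are detached so the in-network "teacher" is not pulled toward its students, and the KL term is scaled by the squared temperature, the usual convention when softening logits for distillation.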