Learning Lightweight Lane Detection CNNs by Self Attention Distillation

Cited by: 386
Authors
Hou, Yuenan [1 ]
Ma, Zheng [2 ]
Liu, Chunxiao [2 ]
Loy, Chen Change [3 ]
Affiliations
[1] Chinese Univ Hong Kong, Hong Kong, Peoples R China
[2] SenseTime Grp Ltd, Hong Kong, Peoples R China
[3] Nanyang Technol Univ, Singapore
DOI: 10.1109/ICCV.2019.00110
CLC Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Training deep models for lane detection is challenging due to the very subtle and sparse supervisory signals inherent in lane annotations. Without learning from much richer context, these models often fail in challenging scenarios, e.g., severe occlusion, ambiguous lanes, and poor lighting conditions. In this paper, we present a novel knowledge distillation approach, i.e., Self Attention Distillation (SAD), which allows a model to learn from itself and gain substantial improvement without any additional supervision or labels. Specifically, we observe that attention maps extracted from a model trained to a reasonable level encode rich contextual information. This valuable contextual information can be used as a form of 'free' supervision for further representation learning, by performing top-down, layer-wise attention distillation within the network itself. SAD can be easily incorporated into any feed-forward convolutional neural network (CNN) and does not increase the inference time. We validate SAD on three popular lane detection benchmarks (TuSimple, CULane, and BDD100K) using lightweight models such as ENet, ResNet-18, and ResNet-34. The lightest model, ENet-SAD, performs comparably to, or even surpasses, existing algorithms. Notably, ENet-SAD has 20× fewer parameters and runs 10× faster than the state-of-the-art SCNN [16], while still achieving compelling performance on all benchmarks.
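The layer-wise distillation described in the abstract can be sketched compactly. Below is a minimal, hypothetical PyTorch sketch (not the authors' released implementation): `attention_map` collapses a feature tensor into a spatial attention map by averaging squared channel activations, and `sad_loss` makes each block mimic the attention map of the next, deeper block, with the deeper target detached so gradients flow only into the shallower layers. The names `attention_map`, `sad_loss`, and `feats`, as well as details such as the normalization and the target resolution, are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def attention_map(feat, out_size):
    """Activation-based attention: mean of squared channel activations,
    resized to a common resolution and L2-normalised over space."""
    amap = feat.pow(2).mean(dim=1, keepdim=True)          # (B, 1, H, W)
    amap = F.interpolate(amap, size=out_size, mode="bilinear",
                         align_corners=False)
    return F.normalize(amap.flatten(1), p=2.0, dim=1)     # (B, H*W)

def sad_loss(feats, out_size=(36, 100)):
    """Sum of MSE losses between each block's attention map and the
    (detached) map of the next, deeper block."""
    loss = feats[0].new_zeros(())
    for shallow, deep in zip(feats[:-1], feats[1:]):
        target = attention_map(deep, out_size).detach()   # deeper block is the target
        loss = loss + F.mse_loss(attention_map(shallow, out_size), target)
    return loss
```

In training, `feats` would be the intermediate feature maps of, e.g., the successive ENet encoder stages, and `sad_loss` would be added to the usual segmentation and lane-existence losses with a small weight. Since the distillation term affects only training, inference cost is unchanged, consistent with the abstract's claim.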
Pages: 1013-1021 (9 pages)
Related Papers (50 in total)
  • [31] Learning Lightweight Face Detector with Knowledge Distillation
    Jin, Haibo
    Zhang, Shifeng
    Zhu, Xiangyu
    Tang, Yinhang
    Lei, Zhen
    Li, Stan Z.
    2019 INTERNATIONAL CONFERENCE ON BIOMETRICS (ICB), 2019
  • [32] Knowledge Distillation With Feature Self Attention
    Park, Sin-Gu
    Kang, Dong-Joong
    IEEE ACCESS, 2023, 11: 34554-34562
  • [33] Boosting the Performance of Lightweight HAR Models with Attention and Knowledge Distillation
    Agac, Sumeyye
    Incel, Ozlem Durmaz
    2024 INTERNATIONAL CONFERENCE ON INTELLIGENT ENVIRONMENTS (IE 2024), 2024: 1-8
  • [34] Learning Lightweight and Superior Detectors with Feature Distillation for Onboard Remote Sensing Object Detection
    Gu, Lingyun
    Fang, Qingyun
    Wang, Zhaokui
    Popov, Eugene
    Dong, Ge
    REMOTE SENSING, 2023, 15 (02)
  • [35] Lightweight infrared detection of ammonia leakage using shuffle and self-attention
    Zhang, Yin-hui
    Zhuang, Hong
    He, Zi-fen
    Yang, Hong-kuan
    Huang, Ying
    CHINESE OPTICS, 2023, 16 (03): 607-619
  • [36] Lane Line Detection Based on Object Feature Distillation
    Haris, Malik
    Glowacz, Adam
    ELECTRONICS, 2021, 10 (09)
  • [37] Lane Detection Method Based on Improved Multi-Head Self-Attention
    Ge, Zekun
    Tao, Fazhan
    Fu, Zhumu
    Song, Shuzhong
    COMPUTER ENGINEERING AND APPLICATIONS, 60 (02): 264-271
  • [39] YOGA: Deep object detection in the wild with lightweight feature learning and multiscale attention
    Sunkara, Raja
    Luo, Tie
    PATTERN RECOGNITION, 2023, 139
  • [40] Curve-based lane estimation model with lightweight attention mechanism
    Zhang, Jindong
    Zhong, Haoting
    SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (05): 2637-2643