Lane Detection Method under Low-Light Conditions Combining Feature Aggregation and Light Style Transfer

Cited by: 0
Authors
Lou, Jianlou [1 ]
Liang, Feng [1 ]
Qu, Zhaoyang [1 ]
Li, Xiangyu [1 ]
Chen, Keyu [1 ]
He, Bochuan [1 ]
Affiliations
[1] Northeast Electric Power University, School of Computer Engineering, Jilin 132012, People's Republic of China
Keywords
autonomous driving; obscured lane detection; light style transfer; fine-grained features;
DOI
10.3103/S0146411623020050
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Deep learning is widely used for lane detection, but applying it under environmental occlusion and low light remains challenging. On the one hand, an ordinary convolutional neural network (CNN) cannot recover lane information on either side of an occlusion under low-light conditions. On the other hand, only a small amount of lane data (such as CULane) has been collected under low-light conditions, and labeling new data requires considerable manual effort. To address these problems, we propose a double attention recurrent feature-shift aggregator (DARESA) module, which exploits prior knowledge of lane shape in the spatial and channel dimensions and enriches the original lane features by repeatedly capturing pixel information across rows and columns. This indirectly increases the global feature information available to the network and improves its ability to extract fine-grained features. Moreover, we train an unsupervised low-light style transfer model suited to autonomous driving scenes. The model transfers the daytime images in the CULane dataset to low-light images, eliminating the cost of manual labeling. In addition, adding an appropriate number of generated images to the training set enhances the environmental adaptability of the lane detector, yielding better detection results than training on CULane alone.
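As the abstract describes it, DARESA combines recurrent row/column feature shifting with attention over the spatial and channel dimensions. The PyTorch sketch below is a minimal illustration of that pattern under stated assumptions, not the authors' implementation: the class names (`ChannelAttention`, `SpatialAttention`, `RecurrentFeatureShift`), kernel sizes, shift strides, and iteration count are all hypothetical choices made for readability.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate over channels (assumed design)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))      # global average pool -> (N, C)
        return x * w[:, :, None, None]


class SpatialAttention(nn.Module):
    """Single-map spatial gate built from mean/max channel statistics."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        stats = torch.cat(
            [x.mean(1, keepdim=True), x.max(1, keepdim=True).values], dim=1
        )
        return x * torch.sigmoid(self.conv(stats))


class RecurrentFeatureShift(nn.Module):
    """Repeatedly rolls the feature map along rows and columns and fuses
    the shifted copy back in, so every pixel gradually accumulates
    evidence from distant lane pixels (useful across occlusions)."""
    def __init__(self, channels: int, iterations: int = 4):
        super().__init__()
        self.iterations = iterations
        self.conv_row = nn.Conv2d(channels, channels, (1, 9), padding=(0, 4))
        self.conv_col = nn.Conv2d(channels, channels, (9, 1), padding=(4, 0))
        self.channel_att = ChannelAttention(channels)
        self.spatial_att = SpatialAttention()

    def forward(self, x):
        h, w = x.shape[2], x.shape[3]
        for i in range(self.iterations):
            # Shift rows (wrap-around), halving the stride each pass,
            # then fuse the shifted copy back in with a 1x9 convolution.
            x = x + F.relu(self.conv_row(torch.roll(x, max(1, h >> (i + 1)), dims=2)))
            # Same along columns with a 9x1 convolution.
            x = x + F.relu(self.conv_col(torch.roll(x, max(1, w >> (i + 1)), dims=3)))
        # Dual attention reweights the aggregated map over channels, then space.
        return self.spatial_att(self.channel_att(x))


feats = torch.randn(2, 128, 36, 100)         # e.g. a backbone feature map
out = RecurrentFeatureShift(128)(feats)
print(out.shape)                             # torch.Size([2, 128, 36, 100])
```

The roll-then-convolve loop is what lets evidence from visible lane pixels propagate into occluded rows and columns; the two attention gates then reweight the aggregated map before it reaches the detection head.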
Pages: 143–153 (11 pages)
Related Papers (50 total)
  • [1] Lane Detection Method under Low-Light Conditions Combining Feature Aggregation and Light Style Transfer
    Lou, Jianlou
    Liang, Feng
    Qu, Zhaoyang
    Li, Xiangyu
    Chen, Keyu
    He, Bochuan
    [J]. Automatic Control and Computer Sciences, 2023, 57 : 143 - 153
  • [2] Lane Detection in Low-light Conditions Using an Efficient Data Enhancement: Light Conditions Style Transfer
    Liu, Tong
    Chen, Zhaowei
    Yang, Yi
    Wu, Zehao
    Li, Haowei
    [J]. 2020 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), 2020, : 1394 - 1399
  • [3] Combining Low-Light Scene Enhancement for Fast and Accurate Lane Detection
    Ke, Changshuo
    Xu, Zhijie
    Zhang, Jianqin
    Zhang, Dongmei
    [J]. SENSORS, 2023, 23 (10)
  • [4] Low-Light Object Detection Combining Transformer and Dynamic Feature Fusion
    Cai, Teng
    Chen, Cifa
    Dong, Fangmin
    [J]. Computer Engineering and Applications, 2024, 60 (09) : 135 - 141
  • [5] Low-light DEtection TRansformer (LDETR): object detection in low-light and adverse weather conditions
    Tiwari, A. K.
    Pattanaik, M.
    Sharma, G. K.
    [J]. Multimedia Tools and Applications, 2024, 83 (36) : 84231 - 84248
  • [6] Feature Map Guided Adapter Network for Object Detection in Low-light Conditions
    Pang, Cong
    Zhou, Wei
    Li, Haoyan
    Zhang, Xiangyu
    Lou, Xin
    [J]. 2024 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS 2024, 2024,
  • [7] Efficient adaptive feature aggregation network for low-light image enhancement
    Li, Canlin
    Gao, Pengcheng
    Liu, Jinhua
    Song, Shun
    Bi, Lihua
    [J]. PLOS ONE, 2022, 17 (08)
  • [8] Single-stage Face Detection under Extremely Low-light Conditions
    Yu, Jun
    Hao, Xinlong
    He, Peng
    [J]. 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021), 2021, : 3516 - 3525
  • [9] Anomaly Detection on the Edge Using Smart Cameras under Low-Light Conditions
    Abu Awwad, Yaser
    Rana, Omer
    Perera, Charith
    [J]. SENSORS, 2024, 24 (03)
  • [10] Pedestrian detection in low-light conditions: A comprehensive survey
    Ghari, Bahareh
    Tourani, Ali
    Shahbahrami, Asadollah
    Gaydadjiev, Georgi
    [J]. IMAGE AND VISION COMPUTING, 2024, 148