To improve the safety of autonomous and assisted driving systems, lane detection must combine real-time processing with high accuracy. To address the complexity of lane detection algorithms and the accuracy loss caused by missing information on small-scale targets, this study introduces an enhanced lane detection model based on the DeeplabV3+ framework. The model adopts the lightweight MobilenetV2 as its backbone network to meet real-time requirements. A Multi-scale Feature Extraction Enhancement Module is designed to handle the uneven distribution of lane dimensions, strengthening the model's ability to predict small targets such as marginal lanes and lanes at long distances. The study further proposes a Convolutional Block Weighted Attention Module, which refines the allocation of attention across both channel and spatial dimensions and thereby improves the model's handling of pixel clusters belonging to the same semantic class. A Feature Fusion Module produces semantically enriched feature maps, and skip connections placed between the encoding and decoding layers fuse features at different depths, yielding a marked improvement in segmentation performance. Experiments on a representative dataset confirm the model's effectiveness, achieving 99.48% Accuracy and 88.22% mIoU while keeping prediction latency at only 35.12 ms per image.
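The abstract does not give the internals of the Convolutional Block Weighted Attention Module; as an illustration only, the sketch below shows the common channel-then-spatial attention pattern such a module typically follows, in plain NumPy. The function names, the shared-MLP channel branch, and the simple pooled-map spatial branch (standing in for the usual learned convolution) are all assumptions, not the paper's actual design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Per-channel weights from pooled descriptors (assumed design).

    feat: (C, H, W); w1: (C//r, C), w2: (C, C//r) form a shared two-layer MLP.
    """
    avg = feat.mean(axis=(1, 2))   # (C,) global average pool
    mx = feat.max(axis=(1, 2))     # (C,) global max pool
    score = w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0)
    return sigmoid(score)[:, None, None]          # (C, 1, 1)

def spatial_attention(feat):
    """Per-location weights from channel-pooled maps (stand-in for a conv)."""
    avg = feat.mean(axis=0)        # (H, W)
    mx = feat.max(axis=0)          # (H, W)
    return sigmoid((avg + mx) / 2.0)[None, :, :]  # (1, H, W)

def weighted_attention(feat, w1, w2):
    """Apply channel attention, then spatial attention, to a feature map."""
    feat = feat * channel_attention(feat, w1, w2)
    return feat * spatial_attention(feat)
```

Because both attention maps are sigmoid-gated, the module only reweights activations (each output magnitude is bounded by the input's), which is what lets it emphasize lane pixels without altering the feature map's shape.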
These findings underscore the proposed model's capacity to deliver real-time performance without compromising accuracy, setting a new benchmark in the domain of lane detection. © 2024, Taiwan Ubiquitous Information CO LTD. All rights reserved.