Existing deep learning-based traffic sign recognition methods often suffer from low accuracy, slow detection response, and limited applicability in real-world scenarios. This paper proposes YOLOv8-ADDA, an improved model based on YOLOv8 that aims to enhance detection accuracy while reducing model complexity. First, the Adaptive Downsampling (ADown) method is integrated into the Conv modules of the backbone, effectively reducing the dimensions of input features and eliminating redundant parameters. Second, the Deformable Attention Transformer (DAT) mechanism is incorporated into the C2f module, dynamically adjusting the attention range to capture more relevant visual features. Third, Wise and Efficient IoU (Wise-EIoU) is adopted as the bounding-box loss function to improve classification performance in object detection. The proposed algorithm is evaluated on the Roadsign and CCTSDB2021 datasets and achieves significant performance gains. On the Roadsign dataset, the model reaches a precision of 94.9%, with mAP50 and mAP50:95 of 97.1% and 85.9%, respectively, surpassing the YOLOv8n baseline by 2.7 and 4.2 percentage points on those metrics. The number of parameters is reduced to 2.79M, a 7.5% decrease compared with YOLOv8n. To verify practical applicability, experiments are conducted on the PIX micro smart car platform, where the model attains an inference speed of 59 FPS, a 15.6% improvement over the baseline, demonstrating its suitability for real-time traffic sign detection tasks. These results validate the effectiveness of the YOLOv8-ADDA model in improving detection accuracy and reducing computational complexity.

© 2025 IOP Publishing Ltd. All rights, including for text and data mining, AI training, and similar technologies, are reserved.
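The Wise-EIoU loss mentioned in the abstract builds on EIoU, which augments the IoU term with penalties on center distance and on width/height differences relative to the smallest enclosing box. As a rough illustration only (not the paper's exact formulation, which additionally applies a Wise-IoU-style dynamic focusing weight), a plain EIoU term for two axis-aligned boxes could be sketched as follows; the function name `eiou_loss` and the `[x1, y1, x2, y2]` box convention are assumptions for this sketch:

```python
def eiou_loss(pred, gt, eps=1e-7):
    """Sketch of an EIoU-style box loss for [x1, y1, x2, y2] boxes.

    L = 1 - IoU + rho2/c2 + (dw)^2/cw^2 + (dh)^2/ch^2,
    where c is the diagonal of the smallest enclosing box and
    cw, ch are its width and height.
    """
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt

    # Intersection area of the two boxes
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih

    # Union area and IoU
    pw, ph = px2 - px1, py2 - py1
    gw, gh = gx2 - gx1, gy2 - gy1
    union = pw * ph + gw * gh - inter + eps
    iou = inter / union

    # Smallest enclosing box (for the normalizing terms)
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    c2 = cw * cw + ch * ch + eps

    # Squared distance between box centers
    dx = (px1 + px2) / 2.0 - (gx1 + gx2) / 2.0
    dy = (py1 + py2) / 2.0 - (gy1 + gy2) / 2.0
    rho2 = dx * dx + dy * dy

    return (1.0 - iou
            + rho2 / c2
            + (pw - gw) ** 2 / (cw * cw + eps)
            + (ph - gh) ** 2 / (ch * ch + eps))
```

Identical boxes yield a loss near zero, while disjoint boxes are penalized by both the missing overlap and the center-distance term; in practice such a loss would be implemented over tensors of boxes inside the detector's training loop.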