With the rapid advancement of deep learning algorithms, object detectors have achieved impressive performance in practical applications. An efficient detection framework is essential for performing detection tasks on devices with limited computational resources. However, current detection algorithms often face challenges due to their complexity, including numerous parameters and significant computational demands. To overcome these challenges, this paper introduces a streamlined and effective detection method. Integrating the FasterNet Block into the Cross-Stage Partial Network (C3) modules of the backbone reduces computational and storage demands. Additionally, introducing cross-scale feature fusion in the neck network further decreases the computational load and parameter count during inference. Meanwhile, a dynamic head with multi-scale processing and the Shape-IoU loss enhance detection accuracy and robustness, achieving a balance between lightweight design and performance. Compared with the original YOLOv5 models, the proposed lightweight method reduces the number of parameters by 29.4–43.0% and compresses model size by 31.6–42.7% while maintaining a high mAP@0.5. Furthermore, the designed models achieve faster inference, as computation is reduced by more than 30%. In robustness experiments under varying lighting conditions, the proposed model maintains stable detection performance even in challenging lighting scenarios, demonstrating its reliability in real-world applications. In conclusion, our research offers considerable improvements in accuracy, parameter efficiency, and model size compared with mainstream object detection algorithms.
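To illustrate the core idea behind the FasterNet Block referenced above, the following is a minimal PyTorch sketch of a partial convolution (PConv) followed by pointwise convolutions with a residual connection. It is an illustrative approximation only: the module names `PartialConv` and `FasterNetBlock`, the channel split ratio, the expansion ratio, and the activation/normalization choices are assumptions and may differ from the configuration actually used in the proposed model.

```python
import torch
import torch.nn as nn


class PartialConv(nn.Module):
    """Partial convolution (PConv): a regular 3x3 conv is applied to only a
    fraction of the input channels; the remaining channels pass through
    unchanged, reducing FLOPs and memory access roughly in proportion."""

    def __init__(self, dim: int, n_div: int = 4):
        super().__init__()
        self.dim_conv = dim // n_div            # channels that are convolved
        self.dim_untouched = dim - self.dim_conv
        self.conv = nn.Conv2d(self.dim_conv, self.dim_conv, 3, 1, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.split(x, [self.dim_conv, self.dim_untouched], dim=1)
        return torch.cat((self.conv(x1), x2), dim=1)


class FasterNetBlock(nn.Module):
    """Illustrative FasterNet-style block: PConv followed by two pointwise
    (1x1) convolutions with channel expansion, plus a residual connection.
    Hyperparameters here are assumptions, not the paper's exact settings."""

    def __init__(self, dim: int, n_div: int = 4, expansion: int = 2):
        super().__init__()
        hidden = dim * expansion
        self.pconv = PartialConv(dim, n_div)
        self.mlp = nn.Sequential(
            nn.Conv2d(dim, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, dim, 1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.mlp(self.pconv(x))


if __name__ == "__main__":
    block = FasterNetBlock(dim=64)
    y = block(torch.randn(1, 64, 80, 80))
    print(y.shape)  # torch.Size([1, 64, 80, 80])
```

Because only a quarter of the channels (with `n_div = 4`) go through the 3x3 convolution, replacing the standard convolutions inside the C3 modules with such blocks is one way the parameter count and computation of the backbone can be reduced without changing feature-map shapes.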