Detection of surfacing white shrimp under hypoxia based on improved lightweight YOLOv5 model

Cited by: 5
Authors
Ran, Xun [1 ,2 ,3 ,4 ]
Li, Beibei [1 ,2 ,3 ,4 ]
Li, Daoliang [1 ,2 ,3 ,4 ]
Wang, Jianping [5 ]
Duan, Qingling [1 ,2 ,3 ,4 ]
Affiliations
[1] China Agr Univ, Natl Innovat Ctr Digital Fishery, Beijing 100083, Peoples R China
[2] Minist Agr & Rural Affairs, Key Lab Smart Farming Technol Aquat Anim & Livesto, Beijing 100083, Peoples R China
[3] China Agr Univ, Beijing Engn & Technol Res Ctr Internet Things Agr, Beijing 100083, Peoples R China
[4] China Agr Univ, Coll Informat & Elect Engn, Beijing 100083, Peoples R China
[5] Ningbo Ocean & Fishery Res Inst, Zhejiang 315000, Peoples R China
Keywords
White shrimp; Abnormal behavior; YOLOv5; Ghost convolution; Brown shrimp
DOI
10.1007/s10499-023-01149-w
Chinese Library Classification (CLC)
S9 [Aquaculture and fisheries]
Subject classification code
0908
Abstract
White shrimp typically surface to breathe when dissolved oxygen is inadequate, and detecting this abnormal behavior can help realize the full benefits of white shrimp farming. Therefore, an accurate and efficient model for detecting surfacing white shrimp was developed in this study. The proposed method is based on the YOLOv5 model and uses ghost convolution in place of standard convolution. A ghost bottleneck was constructed to improve on the original bottleneck, and a more efficient detection layer was built to suit the characteristics of the data. The model was trained and verified on a self-built white shrimp surfacing dataset. Its mAP@0.5 reached 98.139%, while its size and floating-point operations were only 1.76 MB and 2.1 GFLOPs, respectively. Compared with Faster R-CNN, the single-shot multi-box detector (SSD), and YOLOv4-tiny, our model achieves higher detection accuracy and speed, lower computational cost, and a smaller model size. Finally, based on the proposed method, we developed related applications for detecting shrimp surfacing.
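The abstract gives no implementation details, but for orientation, a minimal sketch of a ghost convolution block (the kind of building element the paper substitutes for standard convolution, following the GhostNet design) might look as follows in PyTorch. The channel ratio, kernel sizes, and SiLU activation are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch of a ghost convolution block, assuming a GhostNet-style design:
# a small ordinary convolution produces part of the output channels, and cheap
# depthwise convolutions generate the remaining "ghost" channels, which are then
# concatenated. Hyperparameters here are illustrative only.
import torch
import torch.nn as nn


class GhostConv(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=1, stride=1, ratio=2, dw_size=3):
        super().__init__()
        primary_channels = out_channels // ratio          # channels from the ordinary conv
        cheap_channels = out_channels - primary_channels  # channels from cheap depthwise ops

        # Primary branch: an ordinary, but reduced-width, convolution
        self.primary = nn.Sequential(
            nn.Conv2d(in_channels, primary_channels, kernel_size, stride,
                      kernel_size // 2, bias=False),
            nn.BatchNorm2d(primary_channels),
            nn.SiLU(),
        )
        # Cheap branch: depthwise convolution that generates the "ghost" feature maps
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_channels, cheap_channels, dw_size, 1,
                      dw_size // 2, groups=primary_channels, bias=False),
            nn.BatchNorm2d(cheap_channels),
            nn.SiLU(),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)


if __name__ == "__main__":
    x = torch.randn(1, 64, 80, 80)
    print(GhostConv(64, 128)(x).shape)  # torch.Size([1, 128, 80, 80])

Because the full-width standard convolution is replaced by a half-width convolution plus depthwise operations, parameter count and FLOPs drop substantially, which is consistent with the small model size and low computation reported in the abstract.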
Pages: 3601-3618 (18 pages)
Related papers
50 records in total
  • [41] Shrimp Larvae Counting Based on Improved YOLOv5 Model with Regional Segmentation
    Duan, Hongchao
    Wang, Jun
    Zhang, Yuan
    Wu, Xiangyu
    Peng, Tao
    Liu, Xuhao
    Deng, Delong
    SENSORS, 2024, 24 (19)
  • [42] A Pedestrian Detection Network Model Based on Improved YOLOv5
    Li, Ming-Lun
    Sun, Guo-Bing
    Yu, Jia-Xiang
    ENTROPY, 2023, 25 (02)
  • [43] Improved Pedestrian Fall Detection Model Based on YOLOv5
    Fengl, Yuhua
    Wei, Yi
    Lie, Kejiang
    Feng, Yuandan
    Gan, Zhiqiang
    2022 IEEE 6TH ADVANCED INFORMATION TECHNOLOGY, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (IAEAC), 2022, : 410 - 413
  • [44] Insulator Defect Detection Based on Improved YOLOv5 Model
    Chen, Yongxin
    Du, Zhenan
    Li, Hengxuan
    Zhang, Kanjun
    Wen, Pei
    2024 IEEE 4TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING AND ARTIFICIAL INTELLIGENCE, SEAI 2024, 2024, : 123 - 127
  • [45] Traffic Sign Detection Based on Improved YOLOv5 Model
    Zhao, Yibing
    Wang, Yannan
    Xing, Shuyong
    Guo, Lie
    SMART TRANSPORTATION AND GREEN MOBILITY SAFETY, GITSS 2022, 2024, 1201 : 293 - 307
  • [46] Improved YOLOv5 Smoke Detection Model
    Zheng, Yuanpan
    Xu, Boyang
    Wang, Zhenyu
Computer Engineering and Applications, 2023, 59 (07): 214 - 221
  • [47] Lightweight safflower cluster detection based on YOLOv5
    Guo, Hui
    Wu, Tianlun
    Gao, Guomin
    Qiu, Zhaoxin
    Chen, Haiyang
SCIENTIFIC REPORTS, 2024, 14 (01)
  • [48] LE-YOLOv5: A Lightweight and Efficient Road Damage Detection Algorithm Based on Improved YOLOv5
    Diao, Zhuo
    Huang, Xianfu
    Liu, Han
    Liu, Zhanwei
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2023, 2023
  • [49] Lightweight improved yolov5 model for cucumber leaf disease and pest detection based on deep learning
    Omer, Saman M.
    Ghafoor, Kayhan Z.
    Askar, Shavan K.
    SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (02) : 1329 - 1342