Flue-cured tobacco leaf grade detection through multi-receptive field features fusing adaptively and dynamic loss adjustment

Cited by: 0
Authors:
He Z. [1 ]
Luo Y. [1 ]
Zhang Y. [1 ]
Chen G. [1 ]
Chen D. [1 ]
Xu L. [1 ]
Affiliation:
[1] Faculty of Mechanical and Electrical Engineering, Kunming University of Science and Technology, Kunming
Keywords:
dynamic loss adjustment; flue-cured tobacco leaf; multi-receptive field feature fusion; object detection
DOI: 10.37188/OPE.20243202.0301
Abstract:
Rapid and accurate detection of flue-cured tobacco leaf grade is integral to the advancement of intelligent tobacco equipment and promotes refined management of agricultural products. To address the difficulty of distinguishing flue-cured tobacco leaves that are highly similar across grades, a flue-cured tobacco leaf grade detection network (FTGDNet) based on adaptive multi-receptive field feature fusion and dynamic loss adjustment was proposed. First, FTGDNet adopted CSPNet as the feature extraction backbone and GhostNet as an auxiliary feature extraction network to enhance the model's feature extraction ability. Second, an explicit visual center bottleneck module (EVCB) was embedded at the end of the backbone to merge global feature information with local detail features. Furthermore, a multi-receptive field feature adaptive fusion module (MRFA) was constructed, in which the attentional feature fusion (AFF) mechanism adaptively weights feature maps with different receptive fields, highlighting effective channel information while enlarging the model's local receptive fields. Finally, to address the loss of positioning accuracy caused by the degradation of CIoU_Loss when the prediction box and the ground-truth box share the same aspect ratio and their centers align during regression, a new positioning loss function, MCIoU_Loss, was designed; a rectangular similarity attenuation coefficient was introduced to dynamically adjust the similarity discrimination between the prediction box and the ground-truth box, accelerating model fitting. Experimental results show that the validation and test accuracies of FTGDNet on 10 grades of flue-cured tobacco leaf reached 90.0% and 87.4%, respectively, with an inference time of 12.6 ms. Compared with several advanced object detection networks, FTGDNet achieved higher detection accuracy and faster detection speed, and it can provide technical support for high-precision flue-cured tobacco leaf grade detection. © 2024 Chinese Academy of Sciences. All rights reserved.
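For readers unfamiliar with attention-based fusion, the sketch below illustrates the general idea of adaptively weighting two branches with different receptive fields through a learned channel gate. It is a minimal PyTorch illustration of the AFF-style gating concept, not the paper's MRFA module: the class name, dilation rates, and reduction ratio are assumptions chosen for clarity.

```python
# Minimal sketch (not the paper's code): AFF-style adaptive fusion of two
# branches with different receptive fields. Layer sizes and dilation rates
# are illustrative assumptions.
import torch
import torch.nn as nn

class MultiReceptiveFieldFusion(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Two 3x3 branches; different dilation rates give different receptive fields.
        self.branch_small = nn.Conv2d(channels, channels, 3, padding=1, dilation=1)
        self.branch_large = nn.Conv2d(channels, channels, 3, padding=3, dilation=3)
        # Channel attention over the summed branches produces fusion weights in (0, 1).
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        small = self.branch_small(x)   # local detail
        large = self.branch_large(x)   # wider context
        w = self.gate(small + large)   # per-channel fusion weight
        # Weighted sum: each channel adaptively mixes the two receptive fields.
        return w * small + (1.0 - w) * large

if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)
    fused = MultiReceptiveFieldFusion(64)(feat)
    print(fused.shape)  # torch.Size([1, 64, 32, 32])
```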
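The motivation for MCIoU_Loss can be seen from the standard CIoU definition (Zheng et al., 2020): the aspect-ratio term v vanishes whenever the two boxes share the same width-to-height ratio, and the distance term vanishes when their centers coincide, so a concentric, correctly shaped but wrongly sized prediction is penalized only through 1 − IoU. The snippet below verifies this numerically using the published CIoU formula; it does not reproduce the paper's MCIoU_Loss or its rectangular similarity attenuation coefficient.

```python
# Worked check of the standard CIoU loss (not the paper's MCIoU_Loss):
# same centre + same aspect ratio -> distance term and v term are both zero,
# so the loss degenerates to 1 - IoU even though the box sizes differ.
import math

def iou_xywh(b1, b2):
    """IoU of two axis-aligned boxes given as (cx, cy, w, h)."""
    x11, y11, x12, y12 = b1[0] - b1[2] / 2, b1[1] - b1[3] / 2, b1[0] + b1[2] / 2, b1[1] + b1[3] / 2
    x21, y21, x22, y22 = b2[0] - b2[2] / 2, b2[1] - b2[3] / 2, b2[0] + b2[2] / 2, b2[1] + b2[3] / 2
    iw = max(0.0, min(x12, x22) - max(x11, x21))
    ih = max(0.0, min(y12, y22) - max(y11, y21))
    inter = iw * ih
    union = b1[2] * b1[3] + b2[2] * b2[3] - inter
    return inter / union

def ciou_loss(pred, gt):
    """CIoU loss: 1 - IoU + rho^2 / c^2 + alpha * v."""
    iou = iou_xywh(pred, gt)
    # Squared centre distance over squared diagonal of the smallest enclosing box.
    rho2 = (pred[0] - gt[0]) ** 2 + (pred[1] - gt[1]) ** 2
    cw = max(pred[0] + pred[2] / 2, gt[0] + gt[2] / 2) - min(pred[0] - pred[2] / 2, gt[0] - gt[2] / 2)
    ch = max(pred[1] + pred[3] / 2, gt[1] + gt[3] / 2) - min(pred[1] - pred[3] / 2, gt[1] - gt[3] / 2)
    c2 = cw ** 2 + ch ** 2
    # Aspect-ratio consistency term: zero whenever the w/h ratios are equal.
    v = (4 / math.pi ** 2) * (math.atan(gt[2] / gt[3]) - math.atan(pred[2] / pred[3])) ** 2
    alpha = v / ((1 - iou) + v) if v > 0 else 0.0
    return 1 - iou + rho2 / c2 + alpha * v, iou, v

gt   = (0.0, 0.0, 40.0, 20.0)   # ground-truth box
pred = (0.0, 0.0, 20.0, 10.0)   # same centre, same 2:1 aspect ratio, half the size
loss, iou, v = ciou_loss(pred, gt)
print(f"IoU={iou:.3f}, v={v:.3f}, CIoU loss={loss:.3f}")  # v=0 -> loss is just 1 - IoU = 0.75
```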
Pages: 301-316 (15 pages)