Image classification of insect pests based on saliency detection

Cited by: 0
Authors
Zhao H.-W. [1 ]
Huo D.-S. [1 ]
Wang J. [2 ]
Li X.-N. [3 ]
Affiliations
[1] College of Computer Science and Technology, Jilin University, Changchun
[2] Shanghai Academy of Spaceflight Technology, Shanghai
[3] College of Computer Science and Technology, Changchun Normal University, Changchun
Source
Corresponding author: Li, Xiao-Ning (lixiaoning@ccsfu.edu.cn) | Editorial Board of Jilin University, Vol. 51
Keywords
Computer application; Data augmentation; Insect pest classification; Multi-feature fusion; Saliency detection
DOI
10.13229/j.cnki.jdxbgxb20200749
Abstract
Because insect pest species are diverse and vary greatly in appearance, accurate classification is difficult. Inspired by the two stages of human visual recognition, object localization and object recognition, a pest classification model, PestNet, is designed. The model consists mainly of an Object Positioning Module (OPM) and a Multi-Feature Fusion Module (MFFM). The OPM fuses shallow detail information with deep spatial information through a U-shaped network structure, preliminarily delineating salient regions and outputting spatial semantic features. The MFFM applies bilinear pooling to the spatial semantic features and abstract semantic features, suppressing background information and enriching detail features. In addition, training is assisted by cropping and masking the target region to improve classification accuracy. In experiments on the insect pest data set IP102, the model reached a classification accuracy of 77.40%, showing that it can classify and recognize large-scale pest images against complex backgrounds. © 2021, Jilin University Press. All rights reserved.
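The MFFM described in the abstract fuses two feature maps via bilinear pooling. A minimal sketch of the standard bilinear-pooling step (channel-wise outer product, sum-pooled over spatial locations, followed by the usual signed-square-root and L2 normalization); the function name, shapes, and normalization details are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def bilinear_pool(spatial_feats, semantic_feats):
    """Fuse two feature maps of shape (C1, H, W) and (C2, H, W).

    At each spatial location the outer product of the two channel
    vectors is taken; these are averaged over all H*W locations,
    then flattened, signed-square-rooted, and L2-normalized.
    """
    c1, h, w = spatial_feats.shape
    c2 = semantic_feats.shape[0]
    a = spatial_feats.reshape(c1, h * w)
    b = semantic_feats.reshape(c2, h * w)
    fused = (a @ b.T) / (h * w)                # (C1, C2) pooled outer products
    vec = fused.reshape(-1)                    # flatten to a (C1*C2,) descriptor
    vec = np.sign(vec) * np.sqrt(np.abs(vec))  # signed square-root scaling
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec
```

The resulting C1×C2-dimensional descriptor captures pairwise channel interactions between the two streams, which is what lets bilinear pooling emphasize fine-grained detail over background response.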
Pages: 2174-2181
Page count: 7
References (22 in total)
  • [1] Serre T, Wolf L, Bileschi S, Et al., Robust object recognition with cortex-like mechanisms, IEEE Transactions on Pattern Analysis and Machine Intelligence, 29, 3, pp. 411-426, (2007)
  • [2] Cheng X, Zhang Y H, Chen Y Q, Et al., Pest identification via deep residual learning in complex background, Computers and Electronics in Agriculture, 141, pp. 351-356, (2017)
  • [3] Che Ying, Feng Xiao, Zheng Hong-liang, Denoising method of contrast-enhanced ultrasound image based on convolutional neural networks, Journal of Jilin University Science Edition, 59, 5, pp. 1256-1259, (2021)
  • [4] Li Wei-wei, Method for removing salt and pepper noise from non local switchable filter image, Journal of Jilin University Science Edition, 57, 4, pp. 910-916, (2019)
  • [5] Goodale M A, Milner A D., Separate visual pathways for perception and action, Trends in Neurosciences, 15, 1, pp. 20-25, (1992)
  • [6] Zheng Ya-yu, Tian Xiang, Chen Yao-wu, Visual attention model based on fusion of spatiotemporal features, Journal of Jilin University (Engineering and Technology Edition), 39, 6, pp. 1625-1630, (2009)
  • [7] Li Peng-song, Li Jun-da, Wu Liang-wu, Et al., Image recognition algorithm based on threshold segmentation method and convolutional neural network, Journal of Jilin University Science Edition, 58, 6, pp. 1436-1442, (2020)
  • [8] Fu Bo, Wang Rui-zi, Wang Li-yan, Et al., Enhancement method of underwater color cast image based on deep convolutional neural network, Journal of Jilin University Science Edition, 59, 4, pp. 891-899, (2021)
  • [9] Gao Yun-long, Wu Chuan, Zhu Ming, Short text classification model based on improved convolutional neural network, Journal of Jilin University Science Edition, 58, 4, pp. 923-930, (2020)
  • [10] Ronneberger O, Fischer P, Brox T., U-net: convolutional networks for biomedical image segmentation, International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241, (2015)