Classification and identification of agricultural products based on improved MobileNetV2

Cited: 1
Authors
Chen, Haiwei [1 ]
Zhou, Guohui [1 ]
He, Wei [1 ]
Duan, Xiping [1 ]
Jiang, Huixin [2 ]
Affiliations
[1] Harbin Normal Univ, Sch Comp Sci & Informat Engn, Harbin 150025, Peoples R China
[2] Harbin Normal Univ, Sch Life Sci & Technol, Harbin 150025, Peoples R China
Keywords
COMPUTER VISION; DEEP; FRUIT
DOI
10.1038/s41598-024-53349-w
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
As technology advances, the demand for greater production efficiency continues to rise, driving new trends in agricultural automation and intelligence. Precise classification models play a crucial role in helping farmers accurately identify, classify, and process various agricultural products, thereby improving production efficiency and maximizing their economic value. The standard MobileNetV2 network can perform these tasks, but it tends to exhibit recognition bias when distinguishing subcategories within agricultural product varieties. To address this challenge, this paper introduces an improved MobileNetV2 convolutional neural network. First, inspired by the Inception module in GoogLeNet, we combine an improved Inception module with the original residual module to propose a new Res-Inception module. In addition, to further improve accuracy on the detection task, we introduce an efficient multi-scale attention (EMA) module with cross-spatial learning and embed it into the backbone of the network. Experimental results on the Fruit-360 dataset show that the improved MobileNetV2 outperforms the original MobileNetV2 on agricultural product classification, with an accuracy increase of 1.86%.
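The abstract describes the Res-Inception module only at a high level. Below is a minimal PyTorch sketch of how such a block might be structured, assuming four Inception-style branches (1x1, 3x3, 5x5, and pooled) whose concatenated output preserves the input channel count so a residual skip connection can be added; the branch widths, kernel sizes, and insertion point in the MobileNetV2 backbone are illustrative assumptions, not the authors' published configuration.

import torch
import torch.nn as nn

class ResInceptionBlock(nn.Module):
    # Hypothetical Res-Inception block: Inception-style parallel branches
    # whose concatenation is added back to the input (residual connection).
    # Assumes `channels` is divisible by 4 so the concat matches the input width.
    def __init__(self, channels: int):
        super().__init__()
        b = channels // 4  # assumed per-branch width

        def cbr(cin, cout, k, p=0):
            # Conv -> BatchNorm -> ReLU6, matching MobileNetV2's activation choice
            return nn.Sequential(nn.Conv2d(cin, cout, k, padding=p, bias=False),
                                 nn.BatchNorm2d(cout), nn.ReLU6(inplace=True))

        self.branch1 = cbr(channels, b, 1)
        self.branch3 = nn.Sequential(cbr(channels, b, 1), cbr(b, b, 3, p=1))
        self.branch5 = nn.Sequential(cbr(channels, b, 1), cbr(b, b, 5, p=2))
        self.branch_pool = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                         cbr(channels, b, 1))

    def forward(self, x):
        out = torch.cat([self.branch1(x), self.branch3(x),
                         self.branch5(x), self.branch_pool(x)], dim=1)
        return out + x  # residual skip, as in MobileNetV2's inverted residual blocks

# Quick shape check on a feature map sized like a mid-level MobileNetV2 stage.
x = torch.randn(1, 96, 14, 14)
print(ResInceptionBlock(96)(x).shape)  # torch.Size([1, 96, 14, 14])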
Pages: 14
Related Papers (50 total)
  • [1] Classification and identification of agricultural products based on improved MobileNetV2
    Haiwei Chen
    Guohui Zhou
    Wei He
    Xiping Duan
    Huixin Jiang
    [J]. Scientific Reports, 14
  • [2] Classification of skin diseases based on improved MobileNetV2
    Cheng Yu Jia
    Lin Wei
    Liu Yuan Zhen
    Sun Lu
    [J]. PROCEEDINGS OF THE 33RD CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2021), 2021, : 598 - 603
  • [3] Milling vibration state identification based on improved MobileNetV2
    Zheng, Hualin
    Tu, Lei
    Hu, Teng
    Wang, Xiaohu
    Mi, Liang
    [J]. Jisuanji Jicheng Zhizao Xitong/Computer Integrated Manufacturing Systems, CIMS, 2024, 30 (03): : 982 - 991
  • [4] Sheep face recognition and classification based on an improved MobilenetV2 neural network
    Pang, Yue
    Yu, Wenbo
    Zhang, Yongan
    Xuan, Chuanzhong
    Wu, Pei
    [J]. INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2023, 20 (01)
  • [5] Application of MobileNetV2 to waste classification
    Yong, Liying
    Ma, Le
    Sun, Dandan
    Du, Liping
    [J]. PLOS ONE, 2023, 18 (03):
  • [6] Semantic Segmentation Algorithm Based on Improved MobileNetV2
    Meng, Lu
    Xu, Lei
    Guo, Jia-Yang
    [J]. Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2020, 48 (09): : 1769 - 1776
  • [7] Classification of skin lesions with generative adversarial networks and improved MobileNetV2
    Wang, Hui
    Qi, Qianqian
    Sun, Weijia
    Li, Xue
    Dong, Boxin
    Yao, Chunli
    [J]. INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, 2023, 33 (05) : 1561 - 1576
  • [8] Optimized MobileNetV2 Based on Model Pruning for Image Classification
    Xiao, Peng
    Pang, Yuliang
    Feng, Hao
    Hao, Yu
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), 2022,
  • [9] A study on expression recognition based on improved mobilenetV2 network
    Zhu, Qiming
    Zhuang, Hongwei
    Zhao, Mi
    Xu, Shuangchao
    Meng, Rui
    [J]. SCIENTIFIC REPORTS, 2024, 14 (01)
  • [10] Distracted driving behavior recognition based on improved MobileNetV2
    Bai, Xuemei
    Li, Jialu
    Zhang, Chenjie
    Hu, Hanping
    Gu, Dongbing
    [J]. JOURNAL OF ELECTRONIC IMAGING, 2023, 32 (05)