PhenoNet: A two-stage lightweight deep learning framework for real-time wheat phenophase classification

Cited: 8

Authors
Zhang, Ruinan [1 ]
Jin, Shichao [1 ]
Zhang, Yuanhao [1 ]
Zang, Jingrong [1 ]
Wang, Yu [1 ]
Li, Qing [1 ]
Sun, Zhuangzhuang [1 ]
Wang, Xiao [1 ]
Zhou, Qin [1 ]
Cai, Jian [1 ]
Xu, Shan [1 ]
Su, Yanjun [2 ]
Wu, Jin [3 ]
Jiang, Dong [1 ]
Affiliations
[1] Nanjing Agr Univ, Acad Adv Interdisciplinary Studies, Plant Phen Res Ctr, Collaborat Innovat Ctr Modern Crop Prod, Coll Agr, State Key Lab Crop Genet & Germplasm Enhancement, Nanjing 210095, Peoples R China
[2] Chinese Acad Sci, Inst Bot, State Key Lab Vegetat & Environm Change, Beijing 100093, Peoples R China
[3] Univ Hong Kong, Inst Climate & Carbon Neutral, Sch Biol Sci, Pokfulam Rd, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Wheat phenology; Dataset; Image classification; Deep learning; Transfer learning; Web application; PHENOLOGY; SIMULATION;
DOI
10.1016/j.isprsjprs.2024.01.006
Chinese Library Classification
P9 [Physical Geography];
Discipline Code
0705; 070501;
Abstract
The real-time monitoring of wheat phenology variations among different varieties and their adaptive responses to environmental conditions is essential for advancing breeding efforts and improving cultivation management. Many remote sensing efforts have been made to address the challenges of key phenophase detection. However, existing solutions are not accurate enough to discriminate adjacent phenophases with subtle organ changes, and they are not real-time; for example, vegetation-index-curve-based methods rely on data from the entire growth period, available only after the experiment has finished. Moreover, improving the efficiency, scalability, and availability of phenological studies remains a key challenge. This study proposes a two-stage deep learning framework called PhenoNet for the accurate, efficient, and real-time classification of key wheat phenophases. PhenoNet comprises a lightweight encoder module (PhenoViT) and a long short-term memory (LSTM) module. The performance of PhenoNet was assessed using a well-labeled, multi-variety, and large-volume dataset (WheatPheno). The results show that PhenoNet achieved an overall accuracy (OA) of 0.945, a kappa coefficient (Kappa) of 0.928, and an F1-score (F1) of 0.941. Additionally, the network parameters (Params), number of operations measured by multiply-adds (MAdds), and graphics processing unit memory required for classification (Memory) were 0.889 million (M), 0.093 giga multiply-adds (G), and 8.0 megabytes (MB), respectively. PhenoNet outperformed eleven state-of-the-art deep learning networks, achieving an average improvement of 3.7% in OA, 5.1% in Kappa, and 4.1% in F1, while reducing average Params, MAdds, and Memory by 78.4%, 85.0%, and 75.1%, respectively. Feature visualization and ablation analysis indicate that PhenoNet benefits mainly from its use of time-series information and lightweight modules. Furthermore, PhenoNet can be effectively transferred across years, achieving a high OA of 0.981 using a two-stage transfer learning strategy. Finally, an extensible web platform integrating WheatPheno and PhenoNet has been developed (https://phenonet.org/), ensuring that the work done in this study is accessible, interoperable, and reusable.
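The abstract specifies the architecture only at a high level: a lightweight ViT-style encoder (PhenoViT) embeds each image, and an LSTM aggregates those embeddings across the image time series to classify the current phenophase. The PyTorch sketch below illustrates that two-stage encoder-plus-LSTM pattern; the class names (TinyViTEncoder, PhenoNetSketch), layer sizes, sequence length, seven-class output, and the transfer-learning stub at the end are illustrative assumptions, not the authors' implementation.

# Minimal sketch (assumed, not the authors' code) of the two-stage
# pattern described in the abstract: stage 1 encodes each image of a
# time series into a feature vector; stage 2 runs an LSTM over the
# sequence and classifies the current phenophase.
import torch
import torch.nn as nn

class TinyViTEncoder(nn.Module):
    """Lightweight ViT-style image encoder (hypothetical stand-in for PhenoViT)."""
    def __init__(self, img_size=224, patch=16, dim=128, depth=4, heads=4):
        super().__init__()
        # Non-overlapping patch embedding via a strided convolution.
        self.patchify = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        n_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=dim * 2,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                       # x: (B, 3, H, W)
        t = self.patchify(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        t = self.blocks(t + self.pos)
        return t.mean(dim=1)                    # (B, dim) pooled feature

class PhenoNetSketch(nn.Module):
    """Per-frame encoder followed by an LSTM over the image time series."""
    def __init__(self, n_classes=7, dim=128, hidden=128):  # 7 classes assumed
        super().__init__()
        self.encoder = TinyViTEncoder(dim=dim)
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, seq):                     # seq: (B, T, 3, H, W)
        b, t = seq.shape[:2]
        feats = self.encoder(seq.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)               # (B, T, hidden)
        return self.head(out[:, -1])            # classify the latest step

# Usage: a batch of 2 series, each with 5 RGB frames of 224 x 224.
model = PhenoNetSketch()
logits = model(torch.randn(2, 5, 3, 224, 224))
print(logits.shape)                             # torch.Size([2, 7])

# A plausible (assumed) two-stage transfer recipe for a new year of data:
# stage A freezes the encoder and fine-tunes only the LSTM and head;
# stage B would then unfreeze everything for end-to-end fine-tuning.
for p in model.encoder.parameters():
    p.requires_grad = False

Decoupling the per-image encoder from the temporal model is a plausible reason the reported Params and MAdds stay small: the vision backbone runs once per frame, while the LSTM adds only a thin recurrent layer on top, consistent with the lightweight figures quoted in the abstract.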
Pages: 136-157
Page count: 22
Related Papers
50 in total
  • [1] A Two-Stage Framework for Real-Time Guidewire Endpoint Localization
    Li, Rui-Qi
    Bian, Guibin
    Zhou, Xiaohu
    Xie, Xiaoliang
    Ni, ZhenLiang
    Hou, Zengguang
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2019, PT V, 2019, 11768 : 357 - 365
  • [2] A Two-Stage Real-Time Gesture Recognition Framework for UAV Control
    Zhang, Buyuan
    Zhang, Haoyang
    Zhen, Tao
    Ji, Bowen
    Xie, Liang
    Yan, Ye
    Yin, Erwei
    IEEE SENSORS JOURNAL, 2024, 24 (15) : 24770 - 24782
  • [3] Deep-learning-based two-stage approach for real-time explicit topology optimization
    Sun, S.-Y.
    Cheng, W.-B.
    Zhang, H.-Z.
    Deng, X.-P.
    Qi, H.
    Jilin Daxue Xuebao (Gongxueban)/Journal of Jilin University (Engineering and Technology Edition), 2023, 53 (10) : 2942 - 2951
  • [4] REAL-TIME WHEAT DETECTION BASED ON LIGHTWEIGHT DEEP LEARNING NETWORK REPYOLO MODEL
    Bi, Zhifang
    Li, Yanwen
    Guan, Jiaxiong
    Zhang, Xiaoying
    INMATEH-AGRICULTURAL ENGINEERING, 2024, 72 (01) : 601 - 610
  • [5] Real-time scheduling for two-stage assembly flowshop with dynamic job arrivals by deep reinforcement learning
    Chen, Jian
    Zhang, Hanlei
    Ma, Wenjing
    Xu, Gangyan
    ADVANCED ENGINEERING INFORMATICS, 2024, 62
  • [6] Efficient and Lightweight Framework for Real-Time Ore Image Segmentation Based on Deep Learning
    Sun, Guodong
    Huang, Delong
    Cheng, Le
    Jia, Junjie
    Xiong, Chenyun
    Zhang, Yang
    MINERALS, 2022, 12 (05)
  • [7] Real-time crash risk prediction in freeway tunnels considering features interaction and unobserved heterogeneity: A two-stage deep learning modeling framework
    Jin, Jieling
    Huang, Helai
    Yuan, Chen
    Li, Ye
    Zou, Guoqing
    Xue, Hongli
    ANALYTIC METHODS IN ACCIDENT RESEARCH, 2023, 40
  • [8] Automatic classification of distal radius fracture using a two-stage ensemble deep learning framework
    Min, Hang
    Rabi, Yousef
    Wadhawan, Ashish
    Bourgeat, Pierrick
    Dowling, Jason
    White, Jordy
    Tchernegovski, Ayden
    Formanek, Blake
    Schuetz, Michael
    Mitchell, Gary
    Williamson, Frances
    Hacking, Craig
    Tetsworth, Kevin
    Schmutz, Beat
    PHYSICAL AND ENGINEERING SCIENCES IN MEDICINE, 2023, 46 (02) : 877 - 886
  • [9] A lightweight deep learning model for real-time face recognition
    Deng, Zong-Yue
    Chiang, Hsin-Han
    Kang, Li-Wei
    Li, Hsiao-Chi
    IET IMAGE PROCESSING, 2023, 17 (13) : 3869 - 3883