Semantic-Guided Transformer Network for Crop Classification in Hyperspectral Images

Times Cited: 0
Authors
Pi, Weiqiang [1 ]
Zhang, Tao [2 ]
Wang, Rongyang [1 ]
Ma, Guowei [1 ]
Wang, Yong [1 ]
Du, Jianmin [2 ]
Affiliations
[1] Huzhou Vocat & Tech Coll, Coll Intelligent Mfg & Elevator, Huzhou 313099, Peoples R China
[2] Inner Mongolia Agr Univ, Coll Mech & Elect Engn, Hohhot 010018, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
hyperspectral image classification; transformer; deep learning; attention mechanism; convolutional neural network; attention network
DOI
10.3390/jimaging11020037
CLC Number
TB8 [Photographic Technology]
Discipline Code
0804
Abstract
Hyperspectral remote sensing images of agricultural crops contain rich spectral information that can provide important details about crop growth status, diseases, and pests. However, existing crop classification methods face several key limitations when processing hyperspectral remote sensing images. First, the backgrounds in these images are complex: various background elements may have spectral characteristics similar to those of the crops, and this spectral similarity makes classification models susceptible to background interference, reducing classification accuracy. Second, differences in crop scale increase the difficulty of feature extraction. Crop scale can vary significantly across image regions, and traditional classification methods often struggle to capture this variation effectively. Additionally, because of the limitations of spectral information, especially against backgrounds with multi-scale variation, extracting crop information becomes even more challenging, leading to unstable classification results. To address these issues, a semantic-guided transformer network (SGTN) is proposed, which aims to overcome the limitations of existing deep learning methods and improve crop classification accuracy and robustness. First, a multi-scale spatial-spectral information extraction (MSIE) module is designed that effectively handles crop variation at different scales in the image, extracting richer and more accurate features and reducing the impact of scale changes. Second, a semantic-guided attention (SGA) module is proposed that enhances the model's sensitivity to crop semantic information, further reducing background interference and improving the accuracy of crop area recognition. By combining the MSIE and SGA modules, the SGTN can focus on the semantic features of crops at multiple scales and thus generate more accurate classification results. Finally, a two-stage feature extraction structure is employed to further refine the extraction of crop semantic features and enhance classification accuracy. The results show that on the Indian Pines, Pavia University, and Salinas benchmark datasets, the overall accuracies of the proposed model are 98.24%, 98.34%, and 97.89%, respectively. Compared with other methods, the model achieves better classification accuracy and generalization performance. In the future, the SGTN is expected to be applied to more agricultural remote sensing tasks, such as crop disease detection and yield prediction, providing more reliable technical support for precision agriculture and agricultural monitoring.
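For readers who want a concrete picture of the two building blocks named in the abstract, the following is a minimal PyTorch sketch of a multi-scale spatial-spectral extraction block, a semantic-guided attention block, and a toy transformer classifier that chains them. All module names, kernel sizes, channel widths, and the fusion strategy are illustrative assumptions; the abstract does not specify the authors' implementation, and this sketch is not the SGTN itself.

```python
# Minimal sketch of MSIE-style and SGA-style blocks (assumptions, not the paper's code).
import torch
import torch.nn as nn


class MultiScaleSpatialSpectral(nn.Module):
    """Extract features at several spatial scales from a hyperspectral patch."""

    def __init__(self, in_bands: int, out_channels: int = 64):
        super().__init__()
        # Parallel 2D convolutions over the spectral bands with different
        # receptive fields approximate multi-scale spatial-spectral extraction.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_bands, out_channels, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)
        ])
        self.fuse = nn.Conv2d(3 * out_channels, out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, bands, height, width)
        feats = [torch.relu(branch(x)) for branch in self.branches]
        return torch.relu(self.fuse(torch.cat(feats, dim=1)))


class SemanticGuidedAttention(nn.Module):
    """Reweight spatial positions with a learned crop-vs-background map."""

    def __init__(self, channels: int):
        super().__init__()
        self.semantic_map = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),  # per-pixel weight in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Suppress background positions, emphasize crop-like positions.
        return x * self.semantic_map(x)


class TinySGTNLikeClassifier(nn.Module):
    """Toy pipeline: multi-scale extraction -> semantic attention -> transformer -> label."""

    def __init__(self, in_bands: int, num_classes: int, dim: int = 64):
        super().__init__()
        self.msie = MultiScaleSpatialSpectral(in_bands, dim)
        self.sga = SemanticGuidedAttention(dim)
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.sga(self.msie(x))             # (B, dim, H, W)
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H*W, dim)
        encoded = self.encoder(tokens).mean(dim=1) # average over spatial tokens
        return self.head(encoded)                  # (B, num_classes)


if __name__ == "__main__":
    # A 200-band, 9x9 patch, as commonly used for pixel-wise HSI classification.
    patch = torch.randn(2, 200, 9, 9)
    model = TinySGTNLikeClassifier(in_bands=200, num_classes=16)
    print(model(patch).shape)  # torch.Size([2, 16])
```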
Pages: 17
Related Papers (records 21-30 of 50)
• [21] Yang, Chiao-An; Wu, Meng-Lin; Yeh, Raymond A.; Wang, Yu-Chiang Frank. Consistent and Multi-Scale Scene Graph Transformer for Semantic-Guided Image Outpainting. 2023 IEEE International Conference on Image Processing (ICIP), 2023: 176-180.
• [22] Yu, Junchuan; Li, Yichuan; Zheng, Siqun; Shao, Zhitao; Liu, Rongyuan; Ma, Yanni; Gan, Fuping. Knowledge Guided Classification of Airborne Hyperspectral Images with Deep Convolutional Neural Network. AOPC 2020: Optical Spectroscopy and Imaging; and Biomedical Optics, 2020, 11566.
• [23] Mansouri, Amine; Bakir, Toufik; Elzaar, Abdellah. Improved Semantic-Guided Network for Skeleton-Based Action Recognition. Journal of Visual Communication and Image Representation, 2024, 104.
• [24] Li, Bohan; Xu, Xiao; Wang, Xinghao; Hou, Yutai; Feng, Yunlong; Wang, Feng; Zhang, Xuanliang; Zhu, Qingfu; Che, Wanxiang. Semantic-Guided Generative Image Augmentation Method with Diffusion Models for Image Classification. Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol. 38, No. 4, 2024: 3018-3027.
• [25] Wang, Weishuai; Lei, Ting; Chen, Qingchao; Liu, Yang. Semantic-Guided Novel Category Discovery. Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol. 38, No. 6, 2024: 5607-5614.
• [26] Zhang, Xuming; Su, Yuanchao; Gao, Lianru; Bruzzone, Lorenzo; Gu, Xingfa; Tian, Qingjiu. A Lightweight Transformer Network for Hyperspectral Image Classification. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61.
• [27] Zhao, Zhengang; Hu, Dan; Wang, Hao; Yu, Xianchuan. Convolutional Transformer Network for Hyperspectral Image Classification. IEEE Geoscience and Remote Sensing Letters, 2022, 19.
• [28] Zhang, Yiyan; Xu, Shufang; Hong, Danfeng; Gao, Hongmin; Zhang, Chenkai; Bi, Meiqiao; Li, Chenming. Multimodal Transformer Network for Hyperspectral and LiDAR Classification. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61.
• [29] Zhu, Chenwei; Zhou, Xiaofei; Bao, Liuxin; Wang, Hongkui; Wang, Shuai; Zhu, Zunjie; Yan, Chenggang; Zhang, Jiyong. GINet: Graph Interactive Network with Semantic-Guided Spatial Refinement for Salient Object Detection in Optical Remote Sensing Images. Journal of Visual Communication and Image Representation, 2024, 104.
• [30] Zhang, Huanlong; Qi, Rui; Liu, Mengdan; Song, Peipei; Wang, Xin; Zhong, Bineng. Global Semantic-Guided Graph Attention Network for Siamese Tracking with Ranking Loss. Digital Signal Processing, 2024, 149.