Martian terrain feature extraction method based on unsupervised contrastive learning

Cited: 0
Authors
Yang B. [1 ]
Wei X. [1 ]
Yu H. [1 ]
Liu C. [1 ]
Institutions
[1] School of Astronautics, Beihang University, Beijing
Keywords
contrastive learning; deep learning; feature extraction; Martian terrain; unsupervised;
DOI
10.13700/j.bh.1001-5965.2022.0525
Abstract
Intelligent recognition of Martian surface terrain is significant for the autonomous exploration of Mars rovers. At present, methods for feature extraction from Martian terrain images fall into two categories: traditional shallow visual feature extraction and deep feature extraction based on supervised learning. However, these methods tend to lose image information and require a large amount of labeled data, which are the key problems to be solved. A Martian terrain feature recognition method based on unsupervised contrastive learning was proposed. An image dictionary dataset was established, and a single image was compared against the other images in the dictionary by two groups of neural networks, a "query" network and an "encode" network. A similarity function was then used as the loss function to train the networks, thereby realizing feature recognition of Martian terrain images. The proposed method can also recognize new types of terrain images outside the training dataset and shows superior performance in downstream recognition and classification tasks. Simulation results show that the recognition accuracy of the proposed method is 85.4%, and the recognition accuracy on new terrain images is 84.5%. © 2024 Beijing University of Aeronautics and Astronautics (BUAA). All rights reserved.
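The record does not include the paper's implementation details, but the described setup (a "query" network compared against a dictionary of encoded images, trained with a similarity-based loss) matches the common InfoNCE contrastive objective. The following is a minimal NumPy sketch of that objective for a single query, with the function name, temperature value, and array shapes chosen for illustration rather than taken from the paper:

```python
import numpy as np

def info_nce_loss(query, pos_key, neg_keys, temperature=0.07):
    """InfoNCE-style contrastive loss for one query image.

    query:    (d,) embedding from the "query" network
    pos_key:  (d,) embedding of the matching image from the "encode" network
    neg_keys: (n, d) embeddings of the other images in the dictionary
    """
    # L2-normalize so dot products are cosine similarities
    q = query / np.linalg.norm(query)
    k_pos = pos_key / np.linalg.norm(pos_key)
    k_neg = neg_keys / np.linalg.norm(neg_keys, axis=1, keepdims=True)

    # Similarity logits: positive pair first, then all dictionary negatives
    logits = np.concatenate(([q @ k_pos], k_neg @ q)) / temperature

    # Cross-entropy with the positive pair as the target class
    logits = logits - logits.max()  # numerical stability
    return -logits[0] + np.log(np.exp(logits).sum())
```

Minimizing this loss pulls the query embedding toward its positive key and pushes it away from the rest of the dictionary, which is what lets the learned features transfer to terrain types unseen during training.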
Pages: 1842-1849
Page count: 7
References
22 in total
  • [1] OJHA L, WILHELM M, MURCHIE S, et al., Spectral evidence for hydrated salts in recurring slope lineae on Mars, Nature Geoscience, 8, 11, pp. 829-832, (2015)
  • [2] ZHANG H H, LIANG J, HUANG X Y, et al., Autonomous hazard avoidance control for Chang'E-3 soft landing, Scientia Sinica (Technologica), 44, 6, pp. 559-568, (2014)
  • [3] LEE S J, CHEN T L, YU L, et al., Image classification based on the boost convolutional neural network, IEEE Access, 6, pp. 12755-12768, (2018)
  • [4] JIAO L C, ZHANG F, LIU F, et al., A survey of deep learning-based object detection, IEEE Access, 7, pp. 128837-128868, (2019)
  • [5] JU J, JUNG H, OH Y, et al., Extending contrastive learning to unsupervised coreset selection, IEEE Access, 10, pp. 7704-7715, (2022)
  • [6] SUN Q R, LIU Y Y, CHUA T S, et al., Meta-transfer learning for few-shot learning, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 403-412, (2019)
  • [7] SHORTEN C, KHOSHGOFTAAR T M., A survey on image data augmentation for deep learning
  • [8] BOWLES C, CHEN L, GUERRERO R, et al., GAN Augmentation: Augmenting training data using generative adversarial networks
  • [9] FRID-ADAR M, BEN-COHEN A, AMER R, et al., Improving the segmentation of anatomical structures in chest radiographs using U-net with an ImageNet pre-trained encoder, Proceedings of the International Workshop on Reconstruction and Analysis of Moving Body Organs, International Workshop on Breast Image Analysis, International Workshop on Thoracic Image Analysis, pp. 159-168, (2018)
  • [10] CUI B G, CHEN X, LU Y., Semantic segmentation of remote sensing images using transfer learning and deep convolutional neural network with dense connection, IEEE Access, 8, pp. 116744-116755, (2020)