Ground object classification based on height-aware multi-scale graph convolution network

Cited by: 0
Authors
Wen P. [1 ,2 ]
Cheng Y. [1 ]
Wang P. [1 ]
Zhao M. [1 ]
Zhang B. [1 ,3 ]
Affiliations
[1] Information and Navigation College, Air Force Engineering University, Xi’an
[2] PLA 93575, Chengde
[3] PLA 93897, Xi’an
Funding
National Natural Science Foundation of China
Keywords
deep learning; graph convolution; height-aware; image processing; point cloud classification;
DOI
10.13700/j.bh.1001-5965.2021.0434
Abstract
Point clouds acquired by airborne LiDAR exhibit an uneven distribution of categories and large differences in sample elevation, and existing methods struggle to fully identify fine-grained local structures. This paper proposes an end-to-end network for airborne LiDAR point cloud classification that stacks multi-layer edge convolution operators to extract local and global information simultaneously, and introduces elevation attention weights to supplement feature extraction. First, the original point cloud is divided into sub-blocks, each sampled to a fixed number of points. Then a multi-scale edge convolution operator extracts multi-scale local-global features, which are subsequently merged; in parallel, a height-aware module generates attention weights that are applied to the feature extraction network. Finally, an improved focal loss function further mitigates the uneven category distribution and completes the classification. The method was verified on the standard benchmark data set provided by the International Society for Photogrammetry and Remote Sensing (ISPRS), achieving an overall classification accuracy of 85.9%. The single-category accuracy, especially for roofs, was 4.6% higher than the best result published in the ISPRS benchmark. The results provide a useful reference for practical applications and for network design optimization. © 2023 Beijing University of Aeronautics and Astronautics (BUAA). All rights reserved.
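The pipeline the abstract describes (a neighborhood graph over the points, DGCNN-style edge convolution, elevation-based attention weights, and a focal loss for class imbalance) can be sketched in NumPy. This is an illustrative sketch only, not the authors' implementation: all function names, the single-layer shared MLP, and the toy softmax-over-elevation attention are assumptions.

```python
import numpy as np

def knn(points, k):
    # Pairwise squared distances -> indices of the k nearest neighbours
    # of each point (excluding the point itself).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    return np.argsort(d2, axis=1)[:, :k]

def edge_conv(feats, idx, weight):
    # DGCNN-style edge convolution: build edge features [x_i, x_j - x_i]
    # for each neighbour j of point i, apply a shared linear map + ReLU
    # (a one-layer stand-in for the shared MLP), then max-pool over
    # the neighbourhood.
    neigh = feats[idx]                                  # (n, k, c)
    center = np.repeat(feats[:, None, :], idx.shape[1], axis=1)
    edges = np.concatenate([center, neigh - center], axis=-1)  # (n, k, 2c)
    h = np.maximum(edges @ weight, 0.0)                 # (n, k, c_out)
    return h.max(axis=1)                                # (n, c_out)

def height_attention(z):
    # Toy height-aware weights: softmax over min-max-normalized elevation,
    # so higher points receive larger attention weights.
    zn = (z - z.min()) / (np.ptp(z) + 1e-8)
    e = np.exp(zn)
    return e / e.sum()

def focal_loss(probs, labels, gamma=2.0, alpha=None):
    # Focal loss: down-weights well-classified (high p_t) samples by
    # (1 - p_t)^gamma; optional per-class weights alpha handle imbalance.
    p_t = probs[np.arange(len(labels)), labels]
    w = (1.0 - p_t) ** gamma
    if alpha is not None:
        w = w * alpha[labels]
    return -(w * np.log(p_t + 1e-12)).mean()
```

A multi-scale variant would run `edge_conv` with several values of `k` and concatenate the pooled outputs; the attention weights from `height_attention` would scale the per-point features before pooling.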
Pages: 1471-1478 (7 pages)