Point cloud data provides rich three-dimensional spatial information, and accurate 3D point cloud semantic segmentation enhances environmental understanding and perception, with wide-ranging applications in autonomous driving and scene analysis. However, graph neural networks often struggle to retain semantic relationships among neighboring points during feature extraction, which can cause critical features to be lost during aggregation. To address these challenges, we propose a novel network, the Feature-Enhanced Residual Attention Network. The network includes an innovative graph convolution module, the Neighborhood-Enhanced Convolutional Aggregation Module, which uses K-Nearest Neighbor and Dilated K-Nearest Neighbor techniques to construct diverse dynamic graphs and aggregate their features, prioritizing essential information and significantly enhancing the network's expressiveness and generalization. In addition, we introduce a new spatial attention module designed to capture semantic relationships among points. Experimental results show that the Feature-Enhanced Residual Attention Network outperforms benchmark models on the Stanford Large-Scale 3D Indoor Spaces (S3DIS) dataset, achieving a mean Intersection over Union (mIoU) of 61.3% and an overall accuracy of 86.7%, a significant improvement in semantic segmentation performance.
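To illustrate the dynamic graph construction mentioned above, the following is a minimal sketch of how K-Nearest Neighbor and dilated K-Nearest Neighbor neighborhoods can be built over a point cloud. It assumes PyTorch, and the function names, the dilation scheme (take the top k*d neighbors and keep every d-th one), and the EdgeConv-style feature gathering are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of KNN vs. dilated-KNN neighbor selection for dynamic
# graph construction; names and details are assumptions, not the paper's code.
import torch


def knn_indices(points: torch.Tensor, k: int) -> torch.Tensor:
    """Return the indices of the k nearest neighbors of each point.

    points: (N, 3) tensor of xyz coordinates.
    returns: (N, k) tensor of neighbor indices.
    """
    dists = torch.cdist(points, points)                 # (N, N) pairwise distances
    # Drop column 0 of the sorted result, which is the point itself (distance 0).
    return dists.topk(k + 1, largest=False).indices[:, 1:]


def dilated_knn_indices(points: torch.Tensor, k: int, d: int) -> torch.Tensor:
    """Dilated k-NN: take the k*d nearest neighbors and keep every d-th one,
    widening the receptive field without enlarging the neighborhood size."""
    candidates = knn_indices(points, k * d)              # (N, k*d)
    return candidates[:, ::d]                            # (N, k)


def gather_edge_features(points: torch.Tensor, idx: torch.Tensor) -> torch.Tensor:
    """EdgeConv-style features [x_i, x_j - x_i] for each (center, neighbor) pair."""
    n, k = idx.shape
    neighbors = points[idx]                              # (N, k, 3)
    centers = points.unsqueeze(1).expand(-1, k, -1)      # (N, k, 3)
    return torch.cat([centers, neighbors - centers], dim=-1)  # (N, k, 6)


if __name__ == "__main__":
    pts = torch.rand(1024, 3)
    local_idx = knn_indices(pts, k=16)                   # compact local graph
    dilated_idx = dilated_knn_indices(pts, k=16, d=2)    # wider-context graph
    feats = gather_edge_features(pts, dilated_idx)
    print(local_idx.shape, dilated_idx.shape, feats.shape)
```

Aggregating features over both the compact and the dilated graphs is one way to combine fine local geometry with wider context before attention-based weighting.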