Long-range attention classification for substation point cloud

Cited: 0
Authors
Li, Da [1 ]
Zhao, Hui [2 ]
Yan, Xingyu [1 ]
Zhao, Liang [2 ]
Cao, Hui [1 ]
Institutions
[1] Xi'an Jiaotong University, Xi'an 710049, People's Republic of China
[2] Huaneng Laiwu Power Station, Jinan 271100, People's Republic of China
Keywords
Substation point cloud; Classification; Shuffle channel attention; Dilated channel attention; Convolutional neural networks; Segmentation
DOI
10.1016/j.neucom.2024.128435
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Point cloud classification for substations is challenging due to occluding layouts and the complexity of existing methods, most of which trade inference speed for precision by building complex feature extractors. To ease this performance-complexity trade-off, this paper proposes a lightweight long-range attention classification method for substation point clouds, comprising shuffle channel attention (SCA) and dilated channel attention (DCA). First, SCA captures local cross-channel interaction via 1D convolution and global interaction via channel shuffling, showing promising performance. Second, to further reduce the computation involved in shuffling, we propose DCA, a more elegant method that reshapes the pooled 1D channel vector into a 2D feature map. Notably, DCA is implemented with just a single 2D convolution, whose kernel determines the coverage of cross-channel interaction, and achieves significantly higher inference speed. Furthermore, we develop a heuristic algorithm to adaptively determine parameters such as the kernel size, the number of shuffle groups, and the size of the 2D feature map. Besides ModelNet40 and ScanObjectNN (PB_T50_RS), this paper also selects ten main object classes in substations for training and testing. Experimental results show that the proposed methods bring a notable performance gain of nearly 1%. DCA reaches 93.801% accuracy while adding only 0.0003M parameters and running 15.5% faster than SCA. We also attempt to simplify the proposed methods by reducing the coding dimensions and coding blocks: the simplified DCA achieves 93.4% accuracy with roughly a quarter of the parameters and runs 112% faster, guaranteeing both efficiency and effectiveness. A simplified version achieves the highest accuracies of 85.115% and 83.304% on ScanObjectNN, with improvements of up to 1.735% and 2.391%. This paper also conducts a robustness test in which different proportions of points are missing: with 93.75% of points missing, accuracy drops by only 4.093% and still reaches 89%, significantly ahead of other methods.
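The abstract describes both attention modules only at a high level. The PyTorch sketch below illustrates one plausible reading of SCA (1D conv for local cross-channel mixing, channel shuffle so a second pass reaches distant channels) and DCA (fold the pooled channel vector into a 2D map and mix it with a single 2D conv). Class names, kernel sizes, the group count, and the square folding are illustrative assumptions, not the paper's reference implementation; the paper derives such parameters with a heuristic algorithm.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShuffleChannelAttention(nn.Module):
    """Sketch of SCA: 1D conv for local cross-channel interaction,
    channel shuffle so a second 1D conv reaches distant channels."""
    def __init__(self, channels: int, kernel_size: int = 3, groups: int = 4):
        super().__init__()
        assert channels % groups == 0, "shuffle needs channels divisible by groups"
        self.groups = groups
        self.local = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.far = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _ = x.shape                        # x: (B, C, N) per-point features
        d = x.mean(dim=2)                        # global average pool -> (B, C)
        d = self.local(d.unsqueeze(1)).squeeze(1)          # local interaction
        g = self.groups                                     # channel shuffle
        d = d.view(b, g, c // g).transpose(1, 2).reshape(b, c)
        d = self.far(d.unsqueeze(1)).squeeze(1)             # spans distant channels
        return x * torch.sigmoid(d).unsqueeze(-1)           # channel reweighting

class DilatedChannelAttention(nn.Module):
    """Sketch of DCA: fold the pooled 1D channel vector into a 2D map
    so one cheap 2D conv covers channels far apart in the 1D order."""
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.side = math.ceil(math.sqrt(channels))          # side of square map
        self.conv = nn.Conv2d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _ = x.shape
        d = x.mean(dim=2)                                   # (B, C) descriptor
        d = F.pad(d, (0, self.side ** 2 - c))               # pad to fold exactly
        m = self.conv(d.view(b, 1, self.side, self.side))   # one 2D conv
        w = torch.sigmoid(m.view(b, -1)[:, :c])             # back to (B, C)
        return x * w.unsqueeze(-1)
```

Both sketches add only one small convolution kernel each, which is consistent with the abstract's 0.0003M parameter overhead, and in the DCA case the 2D kernel's receptive field over the folded map is what sets the coverage of cross-channel interaction.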
Pages: 11