GSIP: Green Semantic Segmentation of Large-Scale Indoor Point Clouds

Cited by: 8
Authors
Zhang, Min [1 ]
Kadam, Pranav [1 ]
Liu, Shan [2 ]
Kuo, C. -C. Jay [1 ]
Affiliations
[1] Univ Southern Calif, Viterbi Sch Engn, Ming Hsieh Dept Elect & Comp Engn, Los Angeles, CA 90007 USA
[2] Tencent Amer, Tencent Media Lab, 2747 Pk Blvd, Palo Alto, CA 94306 USA
Keywords
Point cloud; Semantic segmentation; Indoor scene understanding; Green learning; Unsupervised learning; Histograms
DOI
10.1016/j.patrec.2022.10.014
CLC number
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
An efficient solution to semantic segmentation of large-scale indoor scene point clouds is proposed in this work. It is named GSIP (Green Segmentation of Indoor Point clouds), and its performance is evaluated on a representative large-scale benchmark, the Stanford 3D Indoor Segmentation (S3DIS) dataset. GSIP has two novel components: 1) a room-style data pre-processing method that selects a proper subset of points for further processing, and 2) a new feature extractor extended from PointHop. For the former, sampled points of each room form an input unit. For the latter, the weaknesses of PointHop's feature extraction when extended to large-scale point clouds are identified and fixed with a simpler processing pipeline. Compared with PointNet, a pioneering deep-learning-based solution, GSIP is green since it has significantly lower computational complexity and a much smaller model size. Furthermore, experiments show that GSIP outperforms PointNet in segmentation performance on the S3DIS dataset. (c) 2022 Elsevier B.V. All rights reserved.
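As a rough illustration of the room-style pre-processing described in the abstract, the minimal sketch below samples a fixed number of points from each room's point cloud so that one room yields one input unit. The function name sample_room, the default of 4096 points, and the commented loading step are illustrative assumptions, not the authors' actual implementation.

# Minimal sketch of room-style pre-processing: each room's point cloud is
# sampled down to a fixed-size input unit. Names and defaults here are
# illustrative assumptions, not the GSIP authors' implementation.
import numpy as np

def sample_room(room_points: np.ndarray, num_points: int = 4096,
                seed: int = 0) -> np.ndarray:
    """Randomly sample a fixed number of points from one room.

    room_points : (N, D) array of XYZ (plus optional RGB/label) per point.
    Returns a (num_points, D) array forming one input unit.
    """
    rng = np.random.default_rng(seed)
    n = room_points.shape[0]
    # Sample without replacement when the room has enough points,
    # otherwise pad by sampling with replacement.
    replace = n < num_points
    idx = rng.choice(n, size=num_points, replace=replace)
    return room_points[idx]

# Usage: one input unit per room (as opposed to block-wise partitioning).
# rooms = [np.loadtxt(path) for path in room_files]   # hypothetical loader
# units = [sample_room(r) for r in rooms]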
Pages: 9-15
Number of pages: 7