Exploiting Multi-Layer Grid Maps for Surround-View Semantic Segmentation of Sparse LiDAR Data

Cited: 0
Authors
Bieder, Frank [1]
Wirges, Sascha [2]
Janosovits, Johannes [1]
Richter, Sven [1]
Wang, Zheyuan [1]
Stiller, Christoph [1]
Affiliations
[1] Karlsruhe Inst Technol, Inst Measurement & Control Syst, Karlsruhe, Germany
[2] FZI Res Ctr Informat Technol, Karlsruhe, Germany
Source
2020 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), 2020
DOI
10.1109/iv47402.2020.9304848
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification
081104; 0812; 0835; 1405
Abstract
In this paper, we consider the transformation of laser range measurements into a top-view grid map representation to approach the task of LiDAR-only semantic segmentation. Since the recent publication of the SemanticKITTI data set, researchers are now able to study semantic segmentation of urban LiDAR sequences based on a reasonable amount of data. While other approaches propose to learn directly on the 3D point clouds, we exploit a grid map framework to extract relevant information and represent it using multi-layer grid maps. This representation allows us to use well-studied deep learning architectures from the image domain to predict a dense semantic grid map using only the sparse input data of a single LiDAR scan. We compare single-layer and multi-layer approaches and demonstrate the benefit of a multi-layer grid map input. Since the grid map representation allows us to predict a dense, 360° semantic environment representation, we further develop a method to combine the semantic information from multiple scans and create dense ground truth grids. This method allows us to evaluate and compare the performance of our models not only on grid cells with a detection, but on the full visible measurement range.
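For illustration, below is a minimal sketch (not the authors' published code) of how a single LiDAR scan might be rasterized into a multi-layer top-view grid map before being fed to a 2D segmentation network. The specific layers (point density, maximum height, mean intensity), the 50 m x 50 m extent, and the 0.1 m cell size are assumptions for this example, not the configuration reported in the paper.

```python
import numpy as np

def rasterize_scan(points, intensities, grid_size=50.0, cell_size=0.1):
    """Project one LiDAR scan into a multi-layer top-view grid map.

    points:      (N, 3) array of x, y, z coordinates in meters (sensor frame)
    intensities: (N,) array of reflectance values
    Returns a (3, H, W) array with density, max-height, and mean-intensity layers.
    """
    n_cells = int(round(grid_size / cell_size))
    half = grid_size / 2.0

    # Keep only points that fall inside the grid extent around the sensor.
    mask = (np.abs(points[:, 0]) < half) & (np.abs(points[:, 1]) < half)
    pts, ints = points[mask], intensities[mask]

    # Convert metric coordinates to flat cell indices.
    ix = ((pts[:, 0] + half) / cell_size).astype(int)
    iy = ((pts[:, 1] + half) / cell_size).astype(int)
    flat = ix * n_cells + iy

    density = np.zeros(n_cells * n_cells)
    max_z = np.full(n_cells * n_cells, -np.inf)
    sum_i = np.zeros(n_cells * n_cells)

    # Accumulate per-cell statistics (unbuffered ufunc.at handles repeated indices).
    np.add.at(density, flat, 1.0)
    np.maximum.at(max_z, flat, pts[:, 2])
    np.add.at(sum_i, flat, ints)

    mean_i = np.divide(sum_i, density, out=np.zeros_like(sum_i), where=density > 0)
    max_z[density == 0] = 0.0  # give empty cells a neutral height

    return np.stack([density, max_z, mean_i]).reshape(3, n_cells, n_cells)
```

The resulting array can be treated like a multi-channel image, which is what makes well-studied image segmentation architectures directly applicable to the sparse LiDAR input, as described in the abstract.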
Pages: 1892 - 1898
Number of pages: 7
Related Papers
47 records in total
  • [1] Fusion of sequential LiDAR measurements for semantic segmentation of multi-layer grid maps
    Bieder, Frank
    Wirges, Sascha
    Richter, Sven
    Stiller, Christoph
    TM-TECHNISCHES MESSEN, 2021, 88 (06) : 352 - 360
  • [2] Surround-View Fisheye Camera Viewpoint Augmentation for Image Semantic Segmentation
    Cho, Jieun
    Lee, Jonghyun
    Ha, Jinsu
    Resende, Paulo
    Bradai, Benazouz
    Jo, Kichun
    IEEE ACCESS, 2023, 11 : 48480 - 48492
  • [3] A Comparison of Spherical Neural Networks for Surround-View Fisheye Image Semantic Segmentation
    Manzoor, Anam
    Mohandas, Reenu
    Scanlan, Anthony
    Grua, Eoin Martino
    Collins, Fiachra
    Sistu, Ganesh
    Eising, Ciaran
    IEEE OPEN JOURNAL OF VEHICULAR TECHNOLOGY, 2025, 6 : 717 - 740
  • [4] Multi-layer Adaptive Feature Fusion for Semantic Segmentation
    Chen, Yizhen
    Hu, Haifeng
    NEURAL PROCESSING LETTERS, 2020, 51 (02) : 1081 - 1092
  • [5] MOFISSLAM: A Multi-Object Semantic SLAM System With Front-View, Inertial, and Surround-View Sensors for Indoor Parking
    Shao, Xuan
    Zhang, Lin
    Zhang, Tianjun
    Shen, Ying
    Zhou, Yicong
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (07) : 4788 - 4803
  • [6] Improving Lidar-Based Semantic Segmentation of Top-View Grid Maps by Learning Features in Complementary Representations
    Bieder, Frank
    Link, Maximilian
    Romanski, Simon
    Hu, Haohao
    Stiller, Christoph
    2021 IEEE 24TH INTERNATIONAL CONFERENCE ON INFORMATION FUSION (FUSION), 2021, : 64 - 70
  • [7] CPF-UNet: A Dual-Path U-Net Structure for Semantic Segmentation of Panoramic Surround-View Images
    Sun, Qiqing
    Qu, Feng
    APPLIED SCIENCES-BASEL, 2024, 14 (13):
  • [8] Capturing Object Detection Uncertainty in Multi-Layer Grid Maps
    Wirges, Sascha
    Reith-Braun, Marcel
    Lauer, Martin
    Stiller, Christoph
    2019 30TH IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV19), 2019, : 1520 - 1526
  • [9] MLAttack: Fooling Semantic Segmentation Networks by Multi-layer Attacks
    Gupta, Puneet
    Rahtu, Esa
    PATTERN RECOGNITION, DAGM GCPR 2019, 2019, 11824 : 401 - 413