Stabilization and Validation of 3D Object Position Using Multimodal Sensor Fusion and Semantic Segmentation

Cited: 62
Authors
Muresan, Mircea Paul [1 ]
Giosan, Ion [1 ]
Nedevschi, Sergiu [1 ]
Affiliations
[1] Tech Univ Cluj Napoca, Comp Sci Dept, 28 Memorandumului St, Cluj Napoca 400114, Romania
Keywords
data association; multi-object tracking; sensor fusion; motion compensation; neural networks; INFORMATION FUSION; FRAMEWORK;
DOI
10.3390/s20041110
Chinese Library Classification (CLC)
O65 [Analytical Chemistry];
Discipline codes
070302 ; 081704 ;
Abstract
The stabilization and validation of measured object positions is an important step for high-level perception functions and for the correct processing of sensory data. The goal of this process is to detect and handle inconsistencies between different sensor measurements that arise within the perception system. The aggregation of detections from different sensors consists of combining the sensor data into one common reference frame for each identified object, leading to the creation of a super-sensor. The aggregated data may nevertheless contain errors such as false detections, misplaced object cuboids, or an incorrect number of objects in the scene. The stabilization and validation process focuses on mitigating these problems. This paper proposes four contributions for solving the stabilization and validation task for autonomous vehicles, using the following sensors: a trifocal camera, a fisheye camera, a long-range RADAR (Radio Detection and Ranging), and 4-layer and 16-layer LIDARs (Light Detection and Ranging). We propose two original data association methods used in the sensor fusion and tracking processes. The first data association algorithm, created for tracking LIDAR objects, combines multiple appearance and motion features in order to exploit the information available for road objects. The second data association algorithm is designed for trifocal camera objects and aims to find measurement correspondences to sensor-fused objects so that the super-sensor data are enriched with semantic class information. The implemented trifocal object association solution uses a novel polar association scheme combined with a decision tree to find the best hypothesis-measurement correlations.
Another contribution, aimed at stabilizing the position of road objects with unpredictable behavior as measured by multiple types of complementary sensors, is a fusion approach based on the Unscented Kalman Filter and a single-layer perceptron. The final contribution concerns the validation of the 3D object position, which is solved using a fuzzy logic technique combined with a semantic segmentation image. The proposed algorithms run in real time, with a cumulative running time of 90 ms, and have been evaluated against ground truth data extracted from a high-precision GPS (Global Positioning System) with 2 cm accuracy, obtaining an average error of 0.8 m.
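The fusion stage above is built around an Unscented Kalman Filter. As a rough illustration of the UKF mechanics only, and not the authors' implementation (the constant-velocity motion model, sigma-point weights, and noise covariances below are all assumptions), a minimal predict/update cycle for tracking a road object's 2D position and velocity can be sketched as:

```python
import numpy as np

def sigma_points(x, P, kappa=0.0):
    """2n+1 symmetric sigma points with equal weights (simple lambda=kappa variant)."""
    n = x.size
    L = np.linalg.cholesky((n + kappa) * P)   # columns are the spread directions
    pts = np.vstack([x, x + L.T, x - L.T])    # shape (2n+1, n)
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return pts, w

def unscented_transform(pts, w, noise_cov):
    """Weighted mean and covariance of transformed sigma points plus additive noise."""
    mean = w @ pts
    diff = pts - mean
    cov = diff.T @ (w[:, None] * diff) + noise_cov
    return mean, cov

def ukf_step(x, P, z, f, h, Q, R):
    """One predict + update cycle of the Unscented Kalman Filter."""
    # Predict: propagate sigma points through the motion model f.
    pts, w = sigma_points(x, P)
    fx = np.array([f(p) for p in pts])
    x_pred, P_pred = unscented_transform(fx, w, Q)
    # Update: map predicted sigma points into measurement space via h.
    pts2, w2 = sigma_points(x_pred, P_pred)
    hz = np.array([h(p) for p in pts2])
    z_pred, S = unscented_transform(hz, w2, R)
    C = (pts2 - x_pred).T @ (w2[:, None] * (hz - z_pred))  # cross-covariance
    K = C @ np.linalg.inv(S)                                # Kalman gain
    return x_pred + K @ (z - z_pred), P_pred - K @ S @ K.T

# Usage: track a constant-velocity object; state = [px, py, vx, vy].
dt = 0.1
f = lambda p: p + dt * np.array([p[2], p[3], 0.0, 0.0])  # motion model
h = lambda p: p[:2]                                      # sensors observe position only
Q, R = np.eye(4) * 1e-3, np.eye(2) * 1e-2
x, P = np.zeros(4), np.eye(4)
for t in range(1, 31):                                   # noiseless measurements for brevity
    z = np.array([t * dt * 1.0, t * dt * 0.5])           # true velocity (1.0, 0.5) m/s
    x, P = ukf_step(x, P, z, f, h, Q, R)
```

Because the sigma points are pushed through `f` and `h` directly, the same loop accommodates the nonlinear measurement models of heterogeneous sensors (RADAR, LIDAR, cameras) without linearizing by hand; in the paper a perceptron additionally weights the contributions of the different sensor measurements, which this sketch omits.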
Pages: 33