Stabilization and Validation of 3D Object Position Using Multimodal Sensor Fusion and Semantic Segmentation

Cited by: 62
Authors
Muresan, Mircea Paul [1]
Giosan, Ion [1]
Nedevschi, Sergiu [1]
Affiliations
[1] Tech Univ Cluj Napoca, Comp Sci Dept, 28 Memorandumului St, Cluj Napoca 400114, Romania
Keywords
data association; multi-object tracking; sensor fusion; motion compensation; neural networks; information fusion; framework
DOI
10.3390/s20041110
Chinese Library Classification (CLC)
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
The stabilization and validation of measured object positions is an important step for high-level perception functions and for the correct processing of sensory data. Its goal is to detect and handle inconsistencies between measurements produced by the different sensors of the perception system. Aggregating the detections from different sensors consists of combining the sensory data in one common reference frame for each identified object, effectively creating a super-sensor. The aggregated data may still contain errors such as false detections, misplaced object cuboids, or an incorrect number of objects in the scene; the stabilization and validation process is focused on mitigating these problems. This paper proposes four contributions for solving the stabilization and validation task for autonomous vehicles using the following sensors: a trifocal camera, a fisheye camera, a long-range RADAR (radio detection and ranging), and 4-layer and 16-layer LIDARs (light detection and ranging). We propose two original data association methods used in the sensor fusion and tracking processes. The first data association algorithm tracks LIDAR objects and combines multiple appearance and motion features in order to exploit the information available for road objects. The second data association algorithm is designed for trifocal camera objects and finds correspondences between measurements and sensor-fused objects so that the super-sensor data are enriched with semantic class information; it uses a novel polar association scheme combined with a decision tree to find the best hypothesis-measurement correlations. Another contribution, aimed at stabilizing the position and unpredictable behavior of road objects observed by multiple complementary sensors, is a fusion approach based on the Unscented Kalman Filter and a single-layer perceptron. The last contribution addresses the validation of the 3D object position, which is solved using a fuzzy logic technique combined with a semantic segmentation image. The proposed algorithms run in real time, with a cumulative running time of 90 ms, and have been evaluated against ground truth extracted from a high-precision GPS (global positioning system) with 2 cm accuracy, obtaining an average error of 0.8 m.
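To illustrate the kind of polar association scheme mentioned in the abstract, the following minimal Python sketch converts ego-relative object positions to (range, azimuth), gates track-measurement pairs by range and wrapped azimuth differences, and greedily picks nearest-neighbour matches. This is an assumption-based illustration only: the function names, the gating thresholds, and the greedy matching are hypothetical, and the paper's actual method additionally employs a decision tree and further appearance and motion features that are omitted here.

# Illustrative sketch only, not the authors' implementation.
import numpy as np

def to_polar(xy):
    """Convert ego-relative Cartesian positions (N, 2) to (range, azimuth)."""
    xy = np.asarray(xy, dtype=float)
    rng = np.hypot(xy[:, 0], xy[:, 1])
    azi = np.arctan2(xy[:, 1], xy[:, 0])
    return np.stack([rng, azi], axis=1)

def associate_polar(track_xy, meas_xy, max_drange=2.0, max_dazimuth=np.deg2rad(5)):
    """Greedy nearest-neighbour association inside a polar gate.

    Returns a list of (track_index, measurement_index) pairs; tracks and
    measurements that fail the gate are left unmatched.
    """
    tracks = to_polar(track_xy)
    meas = to_polar(meas_xy)

    # Pairwise range difference and wrapped azimuth difference.
    drange = np.abs(tracks[:, None, 0] - meas[None, :, 0])
    dazi = np.abs(np.angle(np.exp(1j * (tracks[:, None, 1] - meas[None, :, 1]))))

    # Normalised association cost; pairs outside the gate are forbidden.
    cost = drange / max_drange + dazi / max_dazimuth
    cost[(drange > max_drange) | (dazi > max_dazimuth)] = np.inf

    pairs, used_t, used_m = [], set(), set()
    # Greedily pick the globally cheapest remaining pair inside the gate.
    for flat in np.argsort(cost, axis=None):
        t, m = np.unravel_index(flat, cost.shape)
        if np.isinf(cost[t, m]):
            break
        if t in used_t or m in used_m:
            continue
        pairs.append((int(t), int(m)))
        used_t.add(t)
        used_m.add(m)
    return pairs

# Example: two fused-object hypotheses and three camera measurements.
print(associate_polar([[10.0, 1.0], [25.0, -3.0]],
                      [[10.5, 1.2], [40.0, 0.0], [24.0, -3.5]]))

In this toy setup the distant measurement at (40, 0) falls outside the gate and remains unassociated, while the two nearby measurements are matched to their respective hypotheses.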
Pages: 33