Fusion of Semantic Segmentation Models for Vehicle Perception Tasks

Cited: 0
Authors
Giorgi, Danut-Vasile [1 ]
Dezert, Jean [2 ]
Josso-Laurain, Thomas [1 ]
Devanne, Maxime [1 ]
Lauffenburger, Jean-Philippe [1 ]
Affiliations
[1] Univ Haute Alsace, IRIMAS UR7499, Mulhouse, France
[2] French Aerosp Lab, ONERA, Palaiseau, France
Keywords
segmentation models; vehicle perception; belief functions; PCR6 fusion rule; entropy
DOI
10.23919/FUSION59988.2024.10706336
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In self-navigation problems for autonomous vehicles, the variability of environmental conditions, complex scenes with vehicles and pedestrians, and the high-dimensional or real-time nature of the tasks make segmentation challenging. Sensor fusion can significantly improve performance. This work therefore presents a late-fusion concept for semantic segmentation in such perception systems. It is based on two approaches for merging the outputs of two neural networks, one trained on camera data and one on LiDAR frames. The first approach fuses the class probabilities directly, computing partial conflicts and redistributing their mass. The second technique lets each source make an individual decision and fuses these decisions afterwards, weighting them by their Shannon entropies. The two segmentation models are trained and evaluated on a dedicated KITTI semantic dataset. The two fusion techniques are compared on multi-class segmentation tasks with illustrative examples; the intersection-over-union metric and the quality of decision are computed to assess the performance of each methodology.
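For intuition only, below is a minimal single-pixel sketch of the two fusion schemes described in the abstract, not the paper's implementation. It assumes each network outputs a softmax distribution over the same classes and treats those distributions as Bayesian (singleton-only) belief masses; the function names and the 1 - H/H_max confidence weighting are illustrative assumptions.

```python
import numpy as np

def pcr6_fuse(p1, p2):
    """Two-source PCR6 fusion of Bayesian (singleton-only) masses.

    For two sources PCR6 coincides with PCR5: the conjunctive mass of each
    class is kept, and every partial conflict p1[i]*p2[j] (i != j) is
    redistributed to classes i and j proportionally to p1[i] and p2[j].
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    fused = p1 * p2                       # conjunctive (agreement) part
    for i in range(len(p1)):
        for j in range(len(p2)):
            if i == j:
                continue
            conflict = p1[i] * p2[j]      # partial conflict between i and j
            denom = p1[i] + p2[j]
            if denom > 0.0:
                fused[i] += conflict * p1[i] / denom
                fused[j] += conflict * p2[j] / denom
    return fused / fused.sum()            # defensive renormalization

def entropy_weighted_decision(p1, p2, eps=1e-12):
    """Decision-level fusion: each source votes with its argmax class,
    weighted by 1 - H/H_max (an assumed confidence measure), where H is
    the Shannon entropy of its class distribution."""
    scores = np.zeros_like(np.asarray(p1, float))
    for p in (np.asarray(p1, float), np.asarray(p2, float)):
        h = -np.sum(p * np.log(p + eps))  # Shannon entropy of the source
        w = 1.0 - h / np.log(len(p))      # low entropy -> high weight
        scores[np.argmax(p)] += w         # weighted vote for its decision
    return int(np.argmax(scores))

# Toy example: camera is confident about class 0, LiDAR mildly prefers class 1.
cam = [0.80, 0.15, 0.05]
lidar = [0.40, 0.45, 0.15]
print(pcr6_fuse(cam, lidar))                  # fused per-class masses
print(entropy_weighted_decision(cam, lidar))  # fused class decision
```

In a real pipeline such functions would run per pixel (or be vectorized over the class axis); the paper's exact conflict handling and decision-quality weighting may differ from this sketch.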
Pages: 8