Deep Multi-modal Object Detection for Autonomous Driving

Cited by: 4
Authors
Ennajar, Amal [1 ]
Khouja, Nadia [1 ]
Boutteau, Remi [2 ]
Tlili, Fethi [1 ]
Affiliations
[1] Sup Com, Grescom Lab, Tunis, Tunisia
[2] Normandie Univ, UNIROUEN, UNILEHAVRE, INSA Rouen, LITIS, F-76000 Rouen, France
Keywords
multi-modality; object detection; deep learning; autonomous driving; simulators; datasets; sensor fusion; point cloud;
DOI
10.1109/SSD52085.2021.9429355
CLC classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject classification codes
0808 ; 0809 ;
Abstract
Robust perception is a major challenge for autonomous vehicles and the main tool for detecting and tracking the different kinds of objects around the vehicle. The aim is to reach human-level capability, which is frequently achieved by exploiting several sensing modalities; this makes sensor fusion a central part of the recognition system. In this paper, we present methods that have been proposed in the literature for the different deep multi-modal perception techniques. We focus on works dealing with the combination of radar information with other sensors. Radar data are particularly important when weather conditions and precipitation degrade the quality of other sensor data. In such cases, it is crucial to have at least some sensors that are immune to adverse weather conditions, and radar is one of them.
Pages: 7 - 11 (5 pages)
Related Papers
50 records in total
  • [21] Virtual Multi-modal Object Detection and Classification with Deep Convolutional Neural Networks
    Mitsakos, Nikolaos
    Papadakis, Manos
    [J]. WAVELETS AND SPARSITY XVIII, 2019, 11138
  • [22] Towards Autonomous Driving: a Multi-Modal 360° Perception Proposal
    Beltran, Jorge
    Guindel, Carlos
    Cortes, Irene
    Barrera, Alejandro
    Astudillo, Armando
    Urdiales, Jesus
    Alvarez, Mario
    Bekka, Farid
    Milanes, Vicente
    Garcia, Fernando
    [J]. 2020 IEEE 23RD INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2020,
  • [23] Exploiting Multi-Modal Fusion for Urban Autonomous Driving Using Latent Deep Reinforcement Learning
    Khalil, Yasser H.
    Mouftah, Hussein T.
    [J]. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (03) : 2921 - 2935
  • [24] Multi-Modal and Multi-Scale Fusion 3D Object Detection of 4D Radar and LiDAR for Autonomous Driving
    Wang, Li
    Zhang, Xinyu
    Li, Jun
    Xv, Baowei
    Fu, Rong
    Chen, Haifeng
    Yang, Lei
    Jin, Dafeng
    Zhao, Lijun
    [J]. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (05) : 5628 - 5641
  • [25] Multi-Modal Sensor Fusion and Object Tracking for Autonomous Racing
    Karle, Phillip
    Fent, Felix
    Huch, Sebastian
    Sauerbeck, Florian
    Lienkamp, Markus
    [J]. IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2023, 8 (07) : 3871 - 3883
  • [26] Deep multi-scale and multi-modal fusion for 3D object detection
    Guo, Rui
    Li, Deng
    Han, Yahong
    [J]. PATTERN RECOGNITION LETTERS, 2021, 151 : 236 - 242
  • [27] Co-Training for Deep Object Detection: Comparing Single-Modal and Multi-Modal Approaches
    Gomez, Jose L.
    Villalonga, Gabriel
    Lopez, Antonio M.
    [J]. SENSORS, 2021, 21 (09)
  • [28] Small Object Detection Technology Using Multi-Modal Data Based on Deep Learning
    Park, Chi-Won
    Seo, Yuri
    Sun, Teh-Jen
    Lee, Ga-Won
    Huh, Eui-Nam
    [J]. 2023 INTERNATIONAL CONFERENCE ON INFORMATION NETWORKING, ICOIN, 2023, : 420 - 422
  • [29] ReCoAt: A Deep Learning-based Framework for Multi-Modal Motion Prediction in Autonomous Driving Application
    Huang, Zhiyu
    Mo, Xiaoyu
    Lv, Chen
    [J]. 2022 IEEE 25TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2022, : 988 - 993
  • [30] Multi-modal object detection via transformer network
    Liu, Wenbing
    Wang, Haibo
    Gao, Quanxue
    Zhu, Zhaorui
    [J]. IET IMAGE PROCESSING, 2023, 17 (12) : 3541 - 3550