Deep Multi-modal Object Detection for Autonomous Driving

Cited by: 4
Authors
Ennajar, Amal [1 ]
Khouja, Nadia [1 ]
Boutteau, Remi [2 ]
Tlili, Fethi [1 ]
Affiliations
[1] Sup Com, Grescom Lab, Tunis, Tunisia
[2] Normandie Univ, UNIROUEN, UNILEHAVRE, INSA Rouen, LITIS, F-76000 Rouen, France
Keywords
multi-modality; object detection; deep learning; autonomous driving; simulators; datasets; sensors fusion; point cloud;
DOI
10.1109/SSD52085.2021.9429355
CLC classification
TM [Electrotechnics]; TN [Electronics and Telecommunications];
Subject classification codes
0808; 0809;
Abstract
Robust perception is a major challenge for autonomous vehicles, as it underpins the detection and tracking of the different kinds of objects around the vehicle. The aim is to reach human-level capability, which is frequently achieved by exploiting several sensing modalities, making sensor fusion a central part of the recognition system. In this paper, we present methods that have been proposed in the literature for deep multi-modal perception. We focus on works that combine radar information with other sensors. Radar data are particularly important when weather conditions and precipitation degrade the quality of other sensor data; in such cases, it is crucial to have at least some sensors that are immune to adverse weather, and radar is one of them.
Pages: 7-11
Page count: 5
Related Papers
50 records in total
  • [1] Leveraging Uncertainties for Deep Multi-modal Object Detection in Autonomous Driving
    Feng, Di
    Cao, Yifan
    Rosenbaum, Lars
    Timm, Fabian
    Dietmayer, Klaus
    [J]. 2020 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), 2020, : 871 - 878
  • [2] Improving Deep Multi-modal 3D Object Detection for Autonomous Driving
    Khamsehashari, Razieh
    Schill, Kerstin
    [J]. 2021 7TH INTERNATIONAL CONFERENCE ON AUTOMATION, ROBOTICS AND APPLICATIONS (ICARA 2021), 2021, : 263 - 267
  • [3] Deep Multi-Modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges
    Feng, Di
    Haase-Schütz, Christian
    Rosenbaum, Lars
    Hertlein, Heinz
    Glaser, Claudius
    Timm, Fabian
    Wiesbeck, Werner
    Dietmayer, Klaus
    [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2021, 22 (03) : 1341 - 1360
  • [4] Evaluation of Measurement Space Representations of Deep Multi-Modal Object Detection for Extended Object Tracking in Autonomous Driving
    Giefer, Lino Antoni
    Khamsehashari, Razieh
    Schill, Kerstin
    [J]. 2020 IEEE 3RD CONNECTED AND AUTOMATED VEHICLES SYMPOSIUM (CAVS), 2020,
  • [5] Multi-Modal 3D Object Detection in Autonomous Driving: A Survey
    Wang, Yingjie
    Mao, Qiuyu
    Zhu, Hanqi
    Deng, Jiajun
    Zhang, Yu
    Ji, Jianmin
    Li, Houqiang
    Zhang, Yanyong
    [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2023, 131 (08) : 2122 - 2152
  • [7] Multi-Modal 3D Object Detection in Autonomous Driving: A Survey and Taxonomy
    Wang, Li
    Zhang, Xinyu
    Song, Ziying
    Bi, Jiangfeng
    Zhang, Guoxin
    Wei, Haiyue
    Tang, Liyao
    Yang, Lei
    Li, Jun
    Jia, Caiyan
    Zhao, Lijun
    [J]. IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2023, 8 (07): : 3781 - 3798
  • [8] Multi-scale multi-modal fusion for object detection in autonomous driving based on selective kernel
    Gao, Xin
    Zhang, Guoying
    Xiong, Yijin
    [J]. MEASUREMENT, 2022, 194
  • [9] MENet: Multi-Modal Mapping Enhancement Network for 3D Object Detection in Autonomous Driving
    Liu, Moyun
    Chen, Youping
    Xie, Jingming
    Zhu, Yijie
    Zhang, Yang
    Yao, Lei
    Bing, Zhenshan
    Zhuang, Genghang
    Huang, Kai
    Zhou, Joey Tianyi
    [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024,
  • [10] Multi-modal Experts Network for Autonomous Driving
    Fang, Shihong
    Choromanska, Anna
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2020, : 6439 - 6445