Deep Multi-modal Object Detection for Autonomous Driving

Cited by: 4
Authors
Ennajar, Amal [1 ]
Khouja, Nadia [1 ]
Boutteau, Remi [2 ]
Tlili, Fethi [1 ]
Affiliations
[1] Sup Com, Grescom Lab, Tunis, Tunisia
[2] Normandie Univ, UNIROUEN, UNILEHAVRE, INSA Rouen, LITIS, F-76000 Rouen, France
Keywords
multi-modality; object detection; deep learning; autonomous driving; simulators; datasets; sensors fusion; POINT CLOUD;
DOI
10.1109/SSD52085.2021.9429355
CLC classification
TM [electrical engineering]; TN [electronic and communication technology];
Discipline codes
0808; 0809;
Abstract
Robust perception is a central challenge for autonomous vehicles: it is the main means of detecting and tracking the different kinds of objects around the vehicle. The goal is to reach human-level capability, which is frequently achieved by exploiting several sensing modalities, making sensor fusion a core component of the recognition system. In this paper, we present methods that have been proposed in the literature for deep multi-modal perception. We focus on works that combine radar information with other sensors. Radar data are particularly important when weather conditions and precipitation degrade the quality of other sensors; in such cases it is crucial to have at least some sensors that are immune to adverse weather, and radar is one of them.
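The abstract's core idea, combining radar with other modalities so that at least one sensor stays reliable in bad weather, can be illustrated with a minimal late-fusion sketch. This is not the paper's method: the `Detection` class, the IoU matching, and the agreement bonus are all illustrative assumptions, showing only the general pattern of matching per-modality detections and boosting confidence when modalities agree.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple      # (x1, y1, x2, y2), axis-aligned, shared coordinate frame
    score: float    # detector confidence in [0, 1]
    modality: str   # "camera", "radar", or "fused"

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def late_fuse(camera_dets, radar_dets, iou_thr=0.5):
    """Match radar detections to camera detections by IoU.

    Agreeing pairs get an averaged, boosted confidence; unmatched radar
    detections are kept as-is, so radar can still fire when the camera
    is degraded (e.g. fog, heavy rain)."""
    fused, used = [], set()
    for c in camera_dets:
        best, best_iou = None, iou_thr
        for j, r in enumerate(radar_dets):
            if j not in used and iou(c.box, r.box) >= best_iou:
                best, best_iou = j, iou(c.box, r.box)
        if best is not None:
            used.add(best)
            r = radar_dets[best]
            # Hypothetical fusion rule: mean score plus a fixed agreement bonus.
            score = min(1.0, 0.5 * (c.score + r.score) + 0.2)
            fused.append(Detection(c.box, score, "fused"))
        else:
            fused.append(c)
    fused.extend(r for j, r in enumerate(radar_dets) if j not in used)
    return fused
```

Real systems surveyed in the paper fuse at varying depths (raw data, features, or decisions); this sketch corresponds to the simplest decision-level case.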
Pages: 7-11 (5 pages)