SMURF: Spatial Multi-Representation Fusion for 3D Object Detection With 4D Imaging Radar

Times Cited: 16
Authors
Liu, Jianan [2]
Zhao, Qiuchi [1]
Xiong, Weiyi [1]
Huang, Tao [3]
Han, Qing-Long [4]
Zhu, Bing [1]
Affiliations
[1] Beihang Univ, Sch Automat Sci & Elect Engn, Beijing 100191, Peoples R China
[2] Vitalent Consulting, S-41761 Gothenburg, Sweden
[3] James Cook Univ, Coll Sci & Engn, Cairns, Qld 4878, Australia
[4] Swinburne Univ Technol, Sch Sci Comp & Engn Technol, Melbourne, Vic 3122, Australia
Source
IEEE Transactions on Intelligent Vehicles
Funding
Australian Research Council; National Natural Science Foundation of China
Keywords
Radar; Radar imaging; Point cloud compression; Radar detection; Feature extraction; Three-dimensional displays; Object detection; 4D imaging radar; radar point cloud; kernel density estimation; multi-dimensional Gaussian mixture; 3D object detection; autonomous driving; MIMO RADAR; NETWORK; CNN;
DOI
10.1109/TIV.2023.3322729
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
The 4D millimeter-wave (mmWave) radar is a promising technology for vehicle sensing due to its cost-effectiveness and operability in adverse weather conditions. However, the adoption of this technology has been hindered by sparsity and noise issues in radar point cloud data. This article introduces spatial multi-representation fusion (SMURF), a novel approach to 3D object detection using a single 4D imaging radar. SMURF leverages multiple representations of radar detection points, including pillarization and the density features of a multi-dimensional Gaussian mixture distribution obtained through kernel density estimation (KDE). KDE effectively mitigates measurement inaccuracy caused by the limited angular resolution and multi-path propagation of radar signals. Additionally, KDE helps alleviate point cloud sparsity by capturing density features. Experimental evaluations on the View-of-Delft (VoD) and TJ4DRadSet datasets demonstrate the effectiveness and generalization ability of SMURF, which outperforms recently proposed 4D imaging radar-based single-representation models. Moreover, while using 4D imaging radar only, SMURF still achieves performance comparable to the state-of-the-art 4D imaging radar and camera fusion-based method, with an increase of 1.22% in bird's-eye-view mean average precision on the TJ4DRadSet dataset and 1.32% in 3D mean average precision on the entire annotated area of the VoD dataset. The proposed method also addresses the challenges of real-time detection, with an inference time of no more than 0.05 seconds for most scans on both datasets. This research highlights the benefits of 4D mmWave radar and serves as a strong benchmark for subsequent work on 3D object detection with 4D imaging radar.
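The record does not reproduce the authors' code; as a rough, hedged illustration of the two representations named in the abstract (pillarization and KDE-derived density features), the Python sketch below fits a Gaussian KDE, which amounts to an equal-weight multi-dimensional Gaussian mixture centred on the detections, evaluates it at every radar point, and pairs the result with a simple bird's-eye-view pillar assignment. The column layout (x, y, z, Doppler, RCS), the 0.5 m pillar size, and the function names are illustrative assumptions, not details taken from SMURF, whose two branches feed a learned detection network rather than a plain concatenation.

# Minimal sketch (not the authors' implementation): per-point KDE density
# features for a radar point cloud, alongside a coarse pillar assignment.
import numpy as np
from scipy.stats import gaussian_kde

def kde_density_features(points, bw_method="scott"):
    """Per-point density under a Gaussian KDE fitted on the point cloud itself.

    points: (N, D) array of radar detection coordinates (e.g. x, y, z).
    Returns an (N,) array; larger values indicate denser neighbourhoods.
    """
    # scipy's gaussian_kde expects the data as (D, N)
    kde = gaussian_kde(points.T, bw_method=bw_method)
    return kde(points.T)

def pillar_indices(points, pillar_size=0.5):
    """Assign each point to a bird's-eye-view pillar (integer x/y grid cell)."""
    return np.floor(points[:, :2] / pillar_size).astype(np.int64)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for one radar scan; the column order x, y, z, Doppler, RCS
    # is a hypothetical layout chosen for this example only.
    scan = rng.normal(scale=[20.0, 20.0, 1.0, 2.0, 5.0], size=(256, 5))
    density = kde_density_features(scan[:, :3])   # density-representation branch
    pillars = pillar_indices(scan[:, :3])         # pillar-representation branch
    # Per-point fusion of the two representations with the raw measurements.
    fused = np.concatenate([scan, density[:, None], pillars.astype(scan.dtype)], axis=1)
    print(fused.shape)  # (256, 8)

The snippet is only meant to show why a density channel helps with sparse, noisy radar returns: points supported by many nearby detections receive high density values, while isolated multi-path artifacts do not, giving a downstream detector an extra cue beyond the pillar grid alone.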
Pages: 799-812
Page count: 14