SMURF: Spatial Multi-Representation Fusion for 3D Object Detection With 4D Imaging Radar

Cited by: 16
Authors
Liu, Jianan [2 ]
Zhao, Qiuchi [1 ]
Xiong, Weiyi [1 ]
Huang, Tao [3 ]
Han, Qing-Long [4 ]
Zhu, Bing [1 ]
Affiliations
[1] Beihang Univ, Sch Automat Sci & Elect Engn, Beijing 100191, Peoples R China
[2] Vitalent Consulting, S-41761 Gothenburg, Sweden
[3] James Cook Univ, Coll Sci & Engn, Cairns, Qld 4878, Australia
[4] Swinburne Univ Technol, Sch Sci Comp & Engn Technol, Melbourne, Vic 3122, Australia
Funding
Australian Research Council; National Natural Science Foundation of China;
Keywords
Radar; Radar imaging; Point cloud compression; Radar detection; Feature extraction; Three-dimensional displays; Object detection; 4D imaging radar; radar point cloud; kernel density estimation; multi-dimensional Gaussian mixture; 3D object detection; autonomous driving; MIMO RADAR; NETWORK; CNN;
DOI
10.1109/TIV.2023.3322729
CLC (Chinese Library Classification) Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The 4D millimeter-wave (mmWave) radar is a promising technology for vehicle sensing due to its cost-effectiveness and operability in adverse weather conditions. However, the adoption of this technology has been hindered by sparsity and noise issues in radar point cloud data. This article introduces spatial multi-representation fusion (SMURF), a novel approach to 3D object detection using a single 4D imaging radar. SMURF leverages multiple representations of radar detection points, including pillarization and density features of a multi-dimensional Gaussian mixture distribution through kernel density estimation (KDE). KDE effectively mitigates measurement inaccuracy caused by limited angular resolution and multi-path propagation of radar signals. Additionally, KDE helps alleviate point cloud sparsity by capturing density features. Experimental evaluations on the View-of-Delft (VoD) and TJ4DRadSet datasets demonstrate the effectiveness and generalization ability of SMURF, outperforming recently proposed 4D imaging radar-based single-representation models. Moreover, while using 4D imaging radar only, SMURF still achieves performance comparable to the state-of-the-art 4D imaging radar and camera fusion-based method, with an increase of 1.22% in the mean average precision on the bird's-eye view of the TJ4DRadSet dataset and 1.32% in the 3D mean average precision on the entire annotated area of the VoD dataset. Our proposed method demonstrates impressive inference time and addresses the challenges of real-time detection, with the inference time no more than 0.05 seconds for most scans on both datasets. This research highlights the benefits of 4D mmWave radar and is a strong benchmark for subsequent works regarding 3D object detection with 4D imaging radar.
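The abstract describes deriving per-point density features from a Gaussian-mixture view of the radar point cloud via KDE. The paper's exact formulation (bandwidth choice, kernel shape, which dimensions are used) is not reproduced here; the sketch below is only a minimal illustration of the general idea, assuming an isotropic Gaussian kernel with a hand-picked bandwidth, with each radar return contributing one mixture component and the evaluated density appended to each point as an extra feature channel.

```python
import numpy as np

def kde_density_features(points, bandwidth=1.0):
    """Per-point Gaussian KDE density over a radar point cloud.

    points: (N, d) array of point coordinates (e.g., x, y, z).
    Returns an (N,) array of density values, one per point, which
    can be concatenated to each point as an additional feature.
    """
    n, d = points.shape
    diff = points[:, None, :] - points[None, :, :]   # (N, N, d) pairwise offsets
    sq_dist = np.sum(diff ** 2, axis=-1)             # (N, N) squared distances
    # Isotropic Gaussian kernel: each point is one component of the
    # mixture, so summing over components evaluates the mixture density.
    kernel = np.exp(-sq_dist / (2.0 * bandwidth ** 2))
    norm = (2.0 * np.pi * bandwidth ** 2) ** (d / 2)
    return kernel.sum(axis=1) / (n * norm)

# Toy radar scan: a dense cluster of returns plus one isolated return.
pts = np.array([[0.0, 0.0, 0.0],
                [0.1, 0.0, 0.0],
                [0.0, 0.1, 0.0],
                [5.0, 5.0, 0.0]])
dens = kde_density_features(pts)
# Cluster points receive noticeably higher density than the outlier,
# which is how such a feature can help separate real structure from
# sparse, noisy returns.
```

Note that this O(N²) all-pairs evaluation is only workable for small scans; a practical pipeline would restrict the kernel sum to spatial neighbors.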
Pages: 799-812
Page count: 14
Related Papers (50 in total)
  • [21] RCM-Fusion: Radar-Camera Multi-Level Fusion for 3D Object Detection
    Kim, Jisong
    Seong, Minjae
    Bang, Geonho
    Kum, Dongsuk
    Choi, Jun Won
    2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2024), 2024, : 18236 - 18242
  • [22] PillarDAN: Pillar-based Dual Attention Network for 3D Object Detection with 4D RaDAR
    Li, Jingzhong
    Yang, Lin
    Chen, Yuxuan
    Yang, Yixin
    Jin, Yue
    Akiyama, Kuanta
    2023 IEEE 26TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS, ITSC, 2023, : 1851 - 1857
  • [23] Radar-camera fusion for 3D object detection with aggregation transformer
    Li, Jun
    Zhang, Han
    Wu, Zizhang
    Xu, Tianhao
    APPLIED INTELLIGENCE, 2024, 54 (21) : 10627 - 10639
  • [24] Camera–Radar Fusion with Modality Interaction and Radar Gaussian Expansion for 3D Object Detection
    Liu X.
    Li Z.
    Zhou Y.
    Peng Y.
    Luo J.
    Cyborg and Bionic Systems, 2024, 5
  • [25] Multi-feature Fusion VoteNet for 3D Object Detection
    Wang, Zhoutao
    Xie, Qian
    Wei, Mingqiang
    Long, Kun
    Wang, Jun
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2022, 18 (01)
  • [26] LiDAR-Based All-Weather 3D Object Detection via Prompting and Distilling 4D Radar
    Chae, Yujeong
    Kim, Hyeonseong
    Oh, Changgyoon
    Kim, Minseok
    Yoon, Kuk-Jin
    COMPUTER VISION - ECCV 2024, PT LVI, 2025, 15114 : 368 - 385
  • [27] MVFAN: Multi-view Feature Assisted Network for 4D Radar Object Detection
    Yan, Qiao
    Wang, Yihan
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT IV, 2024, 14450 : 493 - 511
  • [28] MVFusion: Multi-View 3D Object Detection with Semantic-aligned Radar and Camera Fusion
    Wu, Zizhang
    Chen, Guilian
    Gan, Yuanzhu
    Wang, Lei
    Pu, Jian
    2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA, 2023, : 2766 - 2773
  • [29] Radar Transformer: An Object Classification Network Based on 4D MMW Imaging Radar
    Bai, Jie
    Zheng, Lianqing
    Li, Sen
    Tan, Bin
    Chen, Sihan
    Huang, Libo
    SENSORS, 2021, 21 (11)
  • [30] 3D Multi-Object Tracking Based on Radar-Camera Fusion
    Lin, Zihao
    Hu, Jianming
    2022 IEEE 25TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2022, : 2502 - 2507