SMURF: Spatial Multi-Representation Fusion for 3D Object Detection With 4D Imaging Radar

Cited by: 16
Authors
Liu, Jianan [2 ]
Zhao, Qiuchi [1 ]
Xiong, Weiyi [1 ]
Huang, Tao [3 ]
Han, Qing-Long [4 ]
Zhu, Bing [1 ]
Affiliations
[1] Beihang Univ, Sch Automat Sci & Elect Engn, Beijing 100191, Peoples R China
[2] Vitalent Consulting, S-41761 Gothenburg, Sweden
[3] James Cook Univ, Coll Sci & Engn, Cairns, Qld 4878, Australia
[4] Swinburne Univ Technol, Sch Sci Comp & Engn Technol, Melbourne, Vic 3122, Australia
Source
IEEE TRANSACTIONS ON INTELLIGENT VEHICLES
Funding
Australian Research Council; National Natural Science Foundation of China;
Keywords
Radar; Radar imaging; Point cloud compression; Radar detection; Feature extraction; Three-dimensional displays; Object detection; 4D imaging radar; radar point cloud; kernel density estimation; multi-dimensional Gaussian mixture; 3D object detection; autonomous driving; MIMO RADAR; NETWORK; CNN;
DOI
10.1109/TIV.2023.3322729
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The 4D millimeter-wave (mmWave) radar is a promising technology for vehicle sensing due to its cost-effectiveness and operability in adverse weather conditions. However, the adoption of this technology has been hindered by sparsity and noise issues in radar point cloud data. This article introduces spatial multi-representation fusion (SMURF), a novel approach to 3D object detection using a single 4D imaging radar. SMURF leverages multiple representations of radar detection points, including pillarization and density features of a multi-dimensional Gaussian mixture distribution obtained through kernel density estimation (KDE). KDE effectively mitigates measurement inaccuracy caused by the limited angular resolution and multi-path propagation of radar signals. Additionally, KDE helps alleviate point cloud sparsity by capturing density features. Experimental evaluations on the View-of-Delft (VoD) and TJ4DRadSet datasets demonstrate the effectiveness and generalization ability of SMURF, which outperforms recently proposed 4D imaging radar-based single-representation models. Moreover, while using 4D imaging radar only, SMURF still achieves performance comparable to the state-of-the-art 4D imaging radar and camera fusion-based method, with an increase of 1.22% in the mean average precision on the bird's-eye view of the TJ4DRadSet dataset and 1.32% in the 3D mean average precision on the entire annotated area of the VoD dataset. The proposed method also demonstrates impressive inference time, addressing the challenges of real-time detection: inference takes no more than 0.05 seconds for most scans on both datasets. This research highlights the benefits of 4D mmWave radar and serves as a strong benchmark for subsequent work on 3D object detection with 4D imaging radar.
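The density features described in the abstract can be illustrated with a minimal sketch: fit a Gaussian-kernel density estimate over a point cloud's 3D coordinates and append each point's estimated density as an extra feature channel. This is not the authors' implementation; the point cloud is synthetic and the use of `scipy.stats.gaussian_kde` (which places a Gaussian kernel at every point, approximating a multi-dimensional Gaussian mixture) is an assumption made for illustration only.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical sparse radar point cloud: 200 points with (x, y, z) positions,
# spread wide in x/y and narrow in z to mimic a road scene.
points = rng.normal(size=(200, 3)) * np.array([10.0, 10.0, 1.0])

# Fit a KDE over the 3D positions; gaussian_kde expects shape (dims, N).
kde = gaussian_kde(points.T)

# Evaluate the estimated density at each point. Isolated points (likely
# clutter or multi-path ghosts) get low density; points clustered on real
# objects get high density.
density = kde(points.T)  # shape (200,)

# Augment each point with its density as a fourth feature channel,
# analogous to feeding density features alongside positional ones.
features = np.concatenate([points, density[:, None]], axis=1)  # (200, 4)
```

In this sketch the density channel is simply concatenated to the coordinates; the paper instead fuses such density features with a pillarized representation inside the detection network.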
Pages: 799-812
Page count: 14
Related Papers
50 records in total
  • [1] SMURF: Spatial Multi-Representation Fusion for 3D Object Detection with 4D Imaging Radar
    Liu, Jianan
    Zhao, Qiuchi
    Xiong, Weiyi
    Huang, Tao
    Han, Qing-Long
    Zhu, Bing
    2024 35TH IEEE INTELLIGENT VEHICLES SYMPOSIUM, IEEE IV 2024, 2024, : 3141 - 3141
  • [2] LXL: LiDAR Excluded Lean 3D Object Detection With 4D Imaging Radar and Camera Fusion
    Xiong, Weiyi
    Liu, Jianan
    Huang, Tao
    Han, Qing-Long
    Xia, Yuxuan
    Zhu, Bing
    IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2024, 9 (01): : 79 - 92
  • [3] LXL: LiDAR Excluded Lean 3D Object Detection with 4D Imaging Radar and Camera Fusion
    Xiong, Weiyi
    Liu, Jianan
    Huang, Tao
    Han, Qing-Long
    Xia, Yuxuan
    Zhu, Bing
    2024 35TH IEEE INTELLIGENT VEHICLES SYMPOSIUM, IEEE IV 2024, 2024, : 3142 - 3142
  • [4] SMIFormer: Learning Spatial Feature Representation for 3D Object Detection from 4D Imaging Radar via Multi-View Interactive Transformers
    Shi, Weigang
    Zhu, Ziming
    Zhang, Kezhi
    Chen, Huanlei
    Yu, Zhuoping
    Zhu, Yu
    SENSORS, 2023, 23 (23)
  • [5] Multi-Modal and Multi-Scale Fusion 3D Object Detection of 4D Radar and LiDAR for Autonomous Driving
    Wang, Li
    Zhang, Xinyu
    Li, Jun
    Xv, Baowei
    Fu, Rong
    Chen, Haifeng
    Yang, Lei
    Jin, Dafeng
    Zhao, Lijun
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (05) : 5628 - 5641
  • [6] InterFusion: Interaction-based 4D Radar and LiDAR Fusion for 3D Object Detection
    Wang, Li
    Zhang, Xinyu
    Xv, Baowei
    Zhang, Jinzhao
    Fu, Rong
    Wang, Xiaoyu
    Zhu, Lei
    Ren, Haibing
    Lu, Pingping
    Li, Jun
    Liu, Huaping
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 12247 - 12253
  • [7] SGDet3D: Semantics and Geometry Fusion for 3D Object Detection Using 4D Radar and Camera
    Bai, Xiaokai
    Yu, Zhu
    Zheng, Lianqing
    Zhang, Xiaohan
    Zhou, Zili
    Zhang, Xue
    Wang, Fang
    Bai, Jie
    Shen, Hui-Liang
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10 (01): : 828 - 835
  • [8] Towards Robust 3D Object Detection with LiDAR and 4D Radar Fusion in Various Weather Conditions
    Chae, Yujeong
    Kim, Hyeonseong
    Yoon, Kuk-Jin
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 15162 - 15172
  • [9] MAFF-Net: Enhancing 3D Object Detection With 4D Radar via Multi-Assist Feature Fusion
    Bi, Xin
    Weng, Caien
    Tong, Panpan
    Fan, Baojie
    Eichberge, Arno
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10 (05): : 4284 - 4291
  • [10] MSSA: Multi-Representation Semantics-Augmented Set Abstraction for 3D Object Detection
    Liu, Huaijin
    Du, Jixiang
    Zhang, Yong
    Zhang, Hongbo
    Zeng, Jiandian
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (10)