Scene-aware Learning Network for Radar Object Detection

Cited by: 6
Authors
Zheng, Zangwei [1]
Yue, Xiangyu [2]
Keutzer, Kurt [2]
Sangiovanni-Vincentelli, Alberto [2]
Affiliations
[1] Nanjing Univ, Nanjing, Peoples R China
[2] Univ Calif Berkeley, Berkeley, CA USA
Keywords
Auto-driving; Radar Frequency Data; Object Detection; Neural Network; Data Augmentation
DOI
10.1145/3460426.3463655
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Object detection is essential to safe autonomous or assisted driving. Previous works usually utilize RGB images or LiDAR point clouds to identify and localize multiple objects in self-driving. However, cameras tend to fail in bad driving conditions, e.g., bad weather or weak lighting, while LiDAR scanners are too expensive to be widely deployed in commercial applications. Radar has been drawing more and more attention due to its robustness and low cost. In this paper, we propose a scene-aware radar learning framework for accurate and robust object detection. First, the learning framework contains branches conditioned on the scene category of the radar sequence, with each branch optimized for a specific type of scene. Second, three different 3D autoencoder-based architectures are proposed for radar object detection, and ensemble learning is performed over the different architectures to further boost the final performance. Third, we propose novel scene-aware sequence mix augmentation (SceneMix) and scene-specific post-processing to generate more robust detection results. In the ROD2021 Challenge, we achieved a final average precision of 75.0% and an average recall of 81.0%. Moreover, in the parking lot scene, our framework ranks first with an average precision of 97.8% and an average recall of 98.6%, which demonstrates the effectiveness of our framework.
Pages: 573-579
Page count: 7
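To make the scene-aware branching idea described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch: a shared 3D encoder over radar-frequency frames with one decoder branch per scene category, selected by the scene label of the sequence. All module choices, scene names, tensor shapes, and class counts are illustrative assumptions of this sketch, not the architecture reported in the paper.

```python
# Hypothetical sketch of scene-conditioned branching for radar object detection.
# Module names, scene list, shapes, and hyper-parameters are placeholders only.
import torch
import torch.nn as nn

SCENES = ["parking_lot", "campus_road", "city_street", "highway"]  # assumed scene set


class SceneAwareRadarDetector(nn.Module):
    """Shared 3D encoder over radar-frequency frames, one decoder branch per scene."""

    def __init__(self, in_ch: int = 2, num_classes: int = 3):
        super().__init__()
        # Shared 3D encoder over (channels, time, range, azimuth) radar tensors.
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.ReLU(inplace=True),
        )
        # One lightweight decoder branch per scene category, so each branch can be
        # optimized for its own type of scene, as the abstract describes.
        self.branches = nn.ModuleDict({
            scene: nn.Sequential(
                nn.ConvTranspose3d(64, 32, kernel_size=3, stride=(1, 2, 2),
                                   padding=1, output_padding=(0, 1, 1)),
                nn.ReLU(inplace=True),
                nn.Conv3d(32, num_classes, kernel_size=1),
            )
            for scene in SCENES
        })

    def forward(self, radar_frames: torch.Tensor, scene: str) -> torch.Tensor:
        # radar_frames: (batch, in_ch, T, H, W); returns per-class confidence maps.
        feats = self.encoder(radar_frames)
        return self.branches[scene](feats)


if __name__ == "__main__":
    model = SceneAwareRadarDetector()
    dummy = torch.randn(1, 2, 4, 128, 128)   # fake radar-frequency clip
    out = model(dummy, scene="parking_lot")
    print(out.shape)                          # torch.Size([1, 3, 4, 128, 128])
```

In this sketch the ensemble of three 3D autoencoder architectures and the SceneMix augmentation mentioned in the abstract are omitted; only the shared-encoder / per-scene-decoder split is illustrated.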