Leveraging Vision-Centric Multi-Modal Expertise for 3D Object Detection

Cited by: 0
Authors
Huang, Linyan [1 ]
Li, Zhiqi [2 ]
Sima, Chonghao [1 ]
Wang, Wenhai [3 ]
Wang, Jingdong [4 ]
Qiao, Yu [1 ]
Li, Hongyang [1 ]
Affiliations
[1] Shanghai AI Lab, Shanghai, Peoples R China
[2] Nanjing Univ, Nanjing, Peoples R China
[3] CUHK, Hong Kong, Peoples R China
[4] Baidu, Beijing, Peoples R China
Funding
National Key R&D Program of China
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Current research is primarily dedicated to advancing the accuracy of camera-only 3D object detectors (the apprentice) through knowledge transferred from LiDAR- or multi-modal-based counterparts (the expert). However, the domain gap between LiDAR and camera features, coupled with the inherent incompatibility in temporal fusion, significantly hinders the effectiveness of distillation-based enhancements for apprentices. Motivated by the success of uni-modal distillation, we argue that an apprentice-friendly expert model should rely predominantly on camera features while still achieving performance comparable to multi-modal models. To this end, we introduce VCD, a framework to improve the camera-only apprentice model, comprising an apprentice-friendly multi-modal expert and temporal-fusion-friendly distillation supervision. The multi-modal expert VCD-E adopts a structure identical to that of the camera-only apprentice to alleviate the feature disparity, and leverages LiDAR input as a depth prior to reconstruct the 3D scene, achieving performance on par with other heterogeneous multi-modal experts. Additionally, a fine-grained trajectory-based distillation module is introduced to rectify the motion misalignment of each object in the scene individually. With these improvements, our camera-only apprentice VCD-A sets a new state of the art on nuScenes with a score of 63.1% NDS. The code will be released at https://github.com/OpenDriveLab/Birds-eye-view-Perception.
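As a rough sketch of the distillation idea described in the abstract, the PyTorch snippet below shows how a frozen expert whose BEV features share the apprentice's layout could supervise the apprentice with a simple masked feature-mimicking loss. The class name, the 1x1 projection, and the foreground weighting are illustrative assumptions, not the released VCD implementation; the trajectory-based, per-object motion rectification mentioned above is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BEVFeatureDistillation(nn.Module):
    """Distill BEV features from a frozen expert into a camera-only apprentice.

    Both models are assumed to output BEV feature maps of shape (B, C, H, W).
    Because the expert shares the apprentice's architecture, no cross-modal
    adapter is strictly needed; a lightweight 1x1 projection is kept for generality.
    """

    def __init__(self, channels: int = 256, fg_weight: float = 2.0):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.fg_weight = fg_weight  # extra weight on foreground (object) cells

    def forward(self, apprentice_bev, expert_bev, fg_mask=None):
        # apprentice_bev, expert_bev: (B, C, H, W); fg_mask: (B, 1, H, W) in [0, 1]
        expert_bev = expert_bev.detach()  # expert is frozen during distillation
        student = self.proj(apprentice_bev)
        loss = F.mse_loss(student, expert_bev, reduction="none")  # per-cell loss
        if fg_mask is not None:
            # up-weight BEV cells near annotated objects, keep background at 1.0
            loss = loss * (1.0 + self.fg_weight * fg_mask)
        return loss.mean()


if __name__ == "__main__":
    distill = BEVFeatureDistillation(channels=256)
    apprentice_bev = torch.randn(2, 256, 128, 128, requires_grad=True)
    expert_bev = torch.randn(2, 256, 128, 128)
    fg_mask = (torch.rand(2, 1, 128, 128) > 0.9).float()
    print(distill(apprentice_bev, expert_bev, fg_mask).item())
```

In practice this auxiliary loss would be added to the apprentice's ordinary detection losses; the foreground mask stands in for the finer-grained, trajectory-aligned supervision the paper describes.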
Pages: 16