Multi-Modality Sensing and Data Fusion for Multi-Vehicle Detection

Cited by: 30
Authors
Roy, Debashri [1 ]
Li, Yuanyuan [1 ]
Jian, Tong [1 ]
Tian, Peng [1 ]
Chowdhury, Kaushik [1 ]
Ioannidis, Stratis [1 ]
Affiliations
[1] Northeastern Univ, Dept Elect & Comp Engn, Boston, MA 02115 USA
Funding
U.S. National Science Foundation
Keywords
Vehicle detection; tracking; multimodal data; fusion; latent embeddings; image; seismic; acoustic; radar; CHALLENGES; TRACKING;
DOI
10.1109/TMM.2022.3145663
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
With the recent surge in autonomous vehicles, accurate vehicle detection and tracking is more critical than ever. Detecting vehicles from visual sensors alone fails in non-line-of-sight (NLOS) settings; this can be compensated by including other modalities in a multi-domain sensing environment. We propose several deep learning-based frameworks for fusing different modalities (image, radar, acoustic, seismic) by exploiting complementary latent embeddings, incorporating multiple state-of-the-art fusion strategies. Our proposed fusion frameworks considerably outperform unimodal detection. Moreover, fusing image and non-image modalities improves vehicle tracking and detection under NLOS conditions. We validate our models on the real-world multimodal ESCAPE dataset, showing a 33.16% improvement in vehicle detection by fusion (over visual inference alone) on test scenarios with 30-42% NLOS conditions. To demonstrate how well our framework generalizes, we also validate our models on the multimodal nuScenes dataset, showing a ∼22% improvement over competing methods.
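The core idea of the abstract — projecting each modality into a shared latent space and fusing the complementary embeddings before classification — can be sketched minimally as below. This is an illustrative concatenation-based fusion only, not the paper's actual architecture; the encoder form, feature dimensions, and the `encode`/`fuse` helper names are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    # Per-modality encoder: linear projection + ReLU into a shared latent space.
    return np.maximum(x @ w, 0.0)

# Hypothetical raw feature dimensions for the four modalities in the paper.
dims = {"image": 64, "radar": 16, "acoustic": 8, "seismic": 8}
latent_dim = 32

# Randomly initialized encoder weights (stand-ins for trained networks).
weights = {m: rng.standard_normal((d, latent_dim)) * 0.1
           for m, d in dims.items()}

def fuse(features):
    # Concatenation-based fusion of the per-modality latent embeddings;
    # the fused vector would feed a downstream detection head.
    latents = [encode(features[m], weights[m]) for m in dims]
    return np.concatenate(latents, axis=-1)

# One synthetic multimodal observation.
sample = {m: rng.standard_normal(d) for m, d in dims.items()}
fused = fuse(sample)
print(fused.shape)  # (128,): 4 modalities x 32 latent dims
```

In this scheme a modality that is uninformative in a given scenario (e.g. the image branch under NLOS) contributes a weak embedding while the others still carry signal, which is the intuition behind the reported gains over visual inference alone.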
Pages: 2280-2295
Number of pages: 16