Robust Multi-Modal Sensor Fusion: An Adversarial Approach

Cited by: 4
Authors
Roheda, Siddharth [1 ]
Krim, Hamid [1 ]
Riggan, Benjamin S. [2 ]
Affiliations
[1] North Carolina State Univ, Dept Elect & Comp Engn, Raleigh, NC 27695 USA
[2] Univ Nebraska Lincoln, Dept Elect & Comp Engn, Lincoln, NE 68588 USA
Keywords
Sensor fusion; Sensor phenomena and characterization; Generators; Sensor systems; Generative adversarial networks; Feature extraction; Multi-modal sensors; target detection; Generative Adversarial Networks (GAN); Event Driven Fusion;
DOI
10.1109/JSEN.2020.3018698
CLC classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline codes
0808; 0809
Abstract
In recent years, multi-modal fusion has attracted considerable research interest in both academia and industry. Multi-modal fusion combines information from a set of different sensor types. By exploiting complementary information from different sensors, we show that target detection and classification problems can greatly benefit from this fusion approach, resulting in improved performance. Achieving this gain, however, requires a principled fusion strategy to ensure that the additional information is used constructively and has a positive impact on performance. We further demonstrate the viability of the proposed fusion approach by weakening the strong dependence on the functionality of all sensors, thereby introducing additional flexibility into our solution and lifting a severe limitation in unconstrained surveillance settings subject to environmental effects. Our data-driven approach to multi-modal fusion exploits selected optimal features from an estimated latent space of the data across all modalities. This hidden space is learned via a generative network conditioned on the individual sensor modalities. The hidden space, as an intrinsic structure, is then exploited to detect damaged sensors and to subsequently safeguard the performance of the fused sensor system. Experimental results show that this approach achieves automatic system robustness against noisy or damaged sensors.
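The core idea in the abstract — a shared latent space estimated across all modalities, used both for fusion and for flagging a damaged sensor via its deviation from that space — can be illustrated with a deliberately simplified sketch. The linear sensor model, least-squares latent estimate, and residual test below are illustrative stand-ins, not the paper's GAN-based method: each sensor is modeled as a linear view of a common latent state, the fused latent estimate is recovered jointly, and a sensor whose per-modality reconstruction residual is anomalously large is treated as damaged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two sensor modalities observe the same latent state z,
# each through its own linear map plus noise: x_i = A_i @ z + eps_i.
latent_dim, obs_dim, n = 4, 8, 500
A1 = rng.normal(size=(obs_dim, latent_dim))
A2 = rng.normal(size=(obs_dim, latent_dim))
Z = rng.normal(size=(latent_dim, n))
X1 = A1 @ Z + 0.05 * rng.normal(size=(obs_dim, n))
X2 = A2 @ Z + 0.05 * rng.normal(size=(obs_dim, n))

# Jointly estimate the shared latent state from all modalities (a linear
# stand-in for the paper's generative network conditioned on each modality).
A_stack = np.vstack([A1, A2])
Z_hat, *_ = np.linalg.lstsq(A_stack, np.vstack([X1, X2]), rcond=None)

def residual(Xi, Ai, Zh):
    # Per-sensor reconstruction error measured against the shared latent space.
    return float(np.mean((Xi - Ai @ Zh) ** 2))

r1_clean = residual(X1, A1, Z_hat)

# Simulate a damaged sensor 1: its signal is replaced by heavy noise that no
# longer agrees with the latent structure implied by sensor 2.
X1_bad = 5.0 * rng.normal(size=(obs_dim, n))
Z_hat_bad, *_ = np.linalg.lstsq(A_stack, np.vstack([X1_bad, X2]), rcond=None)
r1_bad = residual(X1_bad, A1, Z_hat_bad)

# The damaged sensor's residual blows up; a threshold on it lets the fusion
# system detect the fault and down-weight or drop that modality.
print(r1_clean, r1_bad)
```

The design point mirrored here is that damage detection needs no per-sensor supervision: consistency with the cross-modal latent structure is itself the test.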
Pages: 1885-1896
Page count: 12