Multi-Frequency RF Sensor Data Adaptation for Motion Recognition with Multi-Modal Deep Learning

Cited by: 8
Authors
Rahman, M. Mahbubur [1 ]
Gurbuz, Sevgi Z. [1 ]
Affiliations
[1] Univ Alabama, Dept Elect & Comp Engn, Tuscaloosa, AL 35487 USA
Funding
US National Science Foundation
Keywords
micro-Doppler; radar; multi-modal learning; adversarial neural networks; classification
DOI
10.1109/RadarConf2147009.2021.9455204
CLC Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology]
Discipline Codes
0808; 0809
Abstract
The widespread availability of low-cost RF sensors has made it easier to construct RF sensor networks for motion recognition, as well as increased the availability of RF data across a variety of frequencies, waveforms, and transmit parameters. However, it is not effective to directly use disparate RF sensor data for the training of deep neural networks, as the phenomenological differences in the data result in significant performance degradation. In this paper, we consider two approaches for the exploitation of multi-frequency RF data: 1) a single sensor case, where adversarial domain adaptation is used to transform the data from one RF sensor to resemble that of another, and 2) a multi-sensor case, where a multi-modal neural network is designed for joint target recognition using measurements from all sensors. Our results show that the developed approaches offer effective techniques for leveraging multi-frequency RF sensor data for target recognition.
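The first approach above, adversarial domain adaptation, can be illustrated with a minimal sketch: a linear "adapter" transforms features from one sensor so that a logistic domain discriminator cannot distinguish them from the other sensor's features, with the two models updated adversarially. All names, the synthetic two-sensor data, and the linear model choices here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for features from two RF sensors operating at
# different frequencies: the "target" sensor's feature distribution is
# shifted and rescaled relative to the "source" sensor's.
src = rng.normal(0.0, 1.0, size=(500, 2))   # source-sensor features
tgt = rng.normal(2.0, 0.5, size=(500, 2))   # target-sensor features

# Linear adapter mapping source features toward the target domain.
W = np.eye(2)
b = np.zeros(2)

# Logistic domain discriminator: outputs P(sample is from target sensor).
w_d = np.zeros(2)
b_d = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for step in range(2000):
    adapted = src @ W.T + b
    X = np.vstack([adapted, tgt])
    y = np.concatenate([np.zeros(len(adapted)), np.ones(len(tgt))])
    p = sigmoid(X @ w_d + b_d)

    # 1) Discriminator step: gradient descent on binary cross-entropy.
    g = p - y
    w_d -= lr * (X.T @ g) / len(X)
    b_d -= lr * g.mean()

    # 2) Adapter step with the adversarial (reversed) objective: push the
    #    discriminator's output on adapted source samples toward 1, i.e.
    #    make adapted source features resemble target features.
    p_src = sigmoid(adapted @ w_d + b_d)
    g_src = (p_src - 1.0)[:, None] * w_d[None, :]   # dLoss/d(adapted)
    W -= lr * (g_src.T @ src) / len(src)
    b -= lr * g_src.mean(axis=0)

adapted = src @ W.T + b
```

After training, the adapted source features should sit much closer to the target distribution than the raw source features did, which is the precondition for reusing a classifier trained on one sensor's data with another sensor. In the paper's setting the adapter and discriminator would be deep networks operating on micro-Doppler representations rather than linear maps on toy features.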
Pages: 6
Related Papers
50 records in total
  • [41] Efficient Data Collection Scheme for Multi-Modal Underwater Sensor Networks Based on Deep Reinforcement Learning
    Song, Shanshan
    Liu, Jun
    Guo, Jiani
    Lin, Bin
    Ye, Qiang
    Cui, Junhong
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (05) : 6558 - 6570
  • [42] Multi-Modal Multi-Instance Learning for Retinal Disease Recognition
    Li, Xirong
    Zhou, Yang
    Wang, Jie
    Lin, Hailan
    Zhao, Jianchun
    Ding, Dayong
    Yu, Weihong
    Chen, Youxin
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 2474 - 2482
  • [43] Learning to Hash on Partial Multi-Modal Data
    Wang, Qifan
    Si, Luo
    Shen, Bin
    PROCEEDINGS OF THE TWENTY-FOURTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI), 2015, : 3904 - 3910
  • [44] Multi-Modal Human Action Recognition Using Deep Neural Networks Fusing Image and Inertial Sensor Data
    Hwang, Inhwan
    Cha, Geonho
    Oh, Songhwai
    2017 IEEE INTERNATIONAL CONFERENCE ON MULTISENSOR FUSION AND INTEGRATION FOR INTELLIGENT SYSTEMS (MFI), 2017, : 278 - 283
  • [45] Multi-Modal Face Recognition
    Shen, Haihong
    Ma, Liqun
    Zhang, Qishan
    2ND IEEE INTERNATIONAL CONFERENCE ON ADVANCED COMPUTER CONTROL (ICACC 2010), VOL. 5, 2010, : 612 - 616
  • [46] Multi-Modal Face Recognition
    Shen, Haihong
    Ma, Liqun
    Zhang, Qishan
    2010 8TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION (WCICA), 2010, : 720 - 723
  • [47] Multi-modal Active Learning From Human Data: A Deep Reinforcement Learning Approach
    Rudovic, Ognjen
    Zhang, Meiru
    Schuller, Bjorn
    Picard, Rosalind W.
    ICMI'19: PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2019, : 6 - 15
  • [48] Multi-modal human motion recognition based on behaviour tree
    Yang, Qin
    Zhou, Zhenhua
    INTERNATIONAL JOURNAL OF BIOMETRICS, 2024, 16 (3-4) : 381 - 398
  • [49] Multi-Modal Song Mood Detection with Deep Learning
    Pyrovolakis, Konstantinos
    Tzouveli, Paraskevi
    Stamou, Giorgos
    SENSORS, 2022, 22 (03)
  • [50] Memory based fusion for multi-modal deep learning
    Priyasad, Darshana
    Fernando, Tharindu
    Denman, Simon
    Sridharan, Sridha
    Fookes, Clinton
    INFORMATION FUSION, 2021, 67 : 136 - 146