Multi-Frequency RF Sensor Data Adaptation for Motion Recognition with Multi-Modal Deep Learning

Cited by: 8
Authors
Rahman, M. Mahbubur [1 ]
Gurbuz, Sevgi Z. [1 ]
Affiliations
[1] Univ Alabama, Dept Elect & Comp Engn, Tuscaloosa, AL 35487 USA
Funding
National Science Foundation (USA);
Keywords
micro-Doppler; radar; multi-modal learning; adversarial neural networks; classification;
DOI
10.1109/RadarConf2147009.2021.9455204
CLC classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline codes
0808 ; 0809 ;
Abstract
The widespread availability of low-cost RF sensors has made it easier to construct RF sensor networks for motion recognition, as well as increased the availability of RF data across a variety of frequencies, waveforms, and transmit parameters. However, it is not effective to directly use disparate RF sensor data for the training of deep neural networks, as the phenomenological differences in the data result in significant performance degradation. In this paper, we consider two approaches for the exploitation of multi-frequency RF data: 1) a single sensor case, where adversarial domain adaptation is used to transform the data from one RF sensor to resemble that of another, and 2) a multi-sensor case, where a multi-modal neural network is designed for joint target recognition using measurements from all sensors. Our results show that the developed approaches offer effective techniques for leveraging multi-frequency RF sensor data for target recognition.
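The two approaches described in the abstract can be sketched at a high level. The NumPy snippet below is an illustrative sketch only, not the paper's implementation: it shows the gradient-reversal mechanism commonly used in adversarial domain adaptation (training features to fool a domain discriminator so one sensor's data resembles another's), and simple feature concatenation as a stand-in for multi-modal fusion. All names, shapes, and frequencies (e.g., 77 GHz / 24 GHz) are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch; names, shapes, and the fusion scheme are
# assumptions, not details taken from the paper.

def grl_forward(features):
    """Gradient reversal layer: identity in the forward pass."""
    return features

def grl_backward(grad, lam=1.0):
    """Backward pass: negate and scale the incoming gradient, so the
    feature extractor is pushed to produce sensor-invariant features
    that confuse a domain discriminator (the adversarial-adaptation
    trick)."""
    return -lam * grad

def fuse(feat_sensor_a, feat_sensor_b):
    """Toy multi-sensor fusion: concatenate per-sensor feature vectors
    before a joint classifier head."""
    return np.concatenate([feat_sensor_a, feat_sensor_b], axis=-1)

# Toy usage with made-up feature vectors.
x = np.array([0.5, -1.0])          # features from one RF sensor
g = np.array([0.2, 0.4])           # gradient from the domain discriminator
fused = fuse(np.ones(3), np.zeros(2))
```

In a full pipeline, `grl_backward` would sit between the shared feature extractor and the domain discriminator during backpropagation, while the fused vector would feed a joint activity classifier.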
Pages: 6
Related papers (50 total)
  • [1] Multi-modal deep learning for landform recognition
    Du, Lin
    You, Xiong
    Li, Ke
    Meng, Liqiu
    Cheng, Gong
    Xiong, Liyang
    Wang, Guangxia
    ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2019, 158 : 63 - 75
  • [2] Multi-modal anchor adaptation learning for multi-modal summarization
    Chen, Zhongfeng
    Lu, Zhenyu
    Rong, Huan
    Zhao, Chuanjun
    Xu, Fan
    NEUROCOMPUTING, 2024, 570
  • [3] A Multi-Modal Deep Learning Approach for Emotion Recognition
    Shahzad, H. M.
    Bhatti, Sohail Masood
    Jaffar, Arfan
    Rashid, Muhammad
    INTELLIGENT AUTOMATION AND SOFT COMPUTING, 2023, 36 (02): : 1561 - 1570
  • [4] A weakly-supervised deep domain adaptation method for multi-modal sensor data
    Mihailescu, Radu-Casian
    2021 IEEE GLOBAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND INTERNET OF THINGS (GCAIOT), 2021, : 45 - 50
  • [5] Multi-Modal Deep Learning for Vehicle Sensor Data Abstraction and Attack Detection
    Rofail, Mark
    Alsafty, Aysha
    Matousek, Matthias
    Kargl, Frank
    2019 IEEE INTERNATIONAL CONFERENCE OF VEHICULAR ELECTRONICS AND SAFETY (ICVES 19), 2019,
  • [6] Deep learning approaches for multi-modal sensor data analysis and abnormality detection
    Jadhav, Santosh Pandurang
    Srinivas, Angalkuditi
    Dipak Raghunath, Patil
    Ramkumar Prabhu, M.
    Suryawanshi, Jaya
    Haldorai, Anandakumar
    MEASUREMENT: SENSORS, 33
  • [7] Multi-Modal Beam Selection: A Transfer Methodology for Multi-Frequency
    Wang, Dong
    Wang, Huiyang
    Tu, Mei
    Zhang, Fan
    Gao, Xiangfeng
    Wang, Zhigang
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 6322 - 6327
  • [8] STARS: Soft Multi-Task Learning for Activity Recognition from Multi-Modal Sensor Data
    Liu, Xi
    Tan, Pang-Ning
    Liu, Lei
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2018, PT II, 2018, 10938 : 569 - 581
  • [9] Classifying Imbalanced Multi-modal Sensor Data for Human Activity Recognition in a Smart Home using Deep Learning
    Alani, Ali A.
    Cosma, Georgina
    Taherkhani, Aboozar
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [10] InstaIndoor and multi-modal deep learning for indoor scene recognition
    Glavan, Andreea
    Talavera, Estefania
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (09): : 6861 - 6877