Multi-Frequency RF Sensor Data Adaptation for Motion Recognition with Multi-Modal Deep Learning

Cited by: 8
Authors
Rahman, M. Mahbubur [1 ]
Gurbuz, Sevgi Z. [1 ]
Affiliation
[1] Univ Alabama, Dept Elect & Comp Engn, Tuscaloosa, AL 35487 USA
Funding
U.S. National Science Foundation;
Keywords
micro-Doppler; radar; multi-modal learning; adversarial neural networks; CLASSIFICATION;
DOI
10.1109/RadarConf2147009.2021.9455204
CLC classification
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline codes
0808; 0809;
Abstract
The widespread availability of low-cost RF sensors has made it easier to construct RF sensor networks for motion recognition and has increased the availability of RF data across a variety of frequencies, waveforms, and transmit parameters. However, directly using disparate RF sensor data to train deep neural networks is ineffective, as phenomenological differences in the data cause significant performance degradation. In this paper, we consider two approaches for exploiting multi-frequency RF data: 1) a single-sensor case, in which adversarial domain adaptation transforms the data from one RF sensor to resemble that of another, and 2) a multi-sensor case, in which a multi-modal neural network is designed for joint target recognition using measurements from all sensors. Our results show that the developed approaches offer effective techniques for leveraging multi-frequency RF sensor data for target recognition.
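The abstract's multi-sensor case (a multi-modal network for joint recognition) can be illustrated with a minimal feature-level-fusion sketch: each sensor's feature vector passes through its own embedding branch, the embeddings are concatenated, and a shared classifier head produces class probabilities. The branch names, layer sizes, and frequency labels below are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch (hypothetical, not the paper's model): feature-level fusion
# of feature vectors from two RF sensors, e.g. 77 GHz and 24 GHz radars.
import math
import random

random.seed(0)

def linear(x, w, b):
    """Dense layer: y = W x + b, with W as a list of rows."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def relu(x):
    return [max(0.0, v) for v in x]

def softmax(x):
    m = max(x)                          # subtract max for numerical stability
    e = [math.exp(v - m) for v in x]
    s = sum(e)
    return [v / s for v in e]

def rand_mat(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)]
            for _ in range(rows)]

# Illustrative dimensions: each branch embeds a 16-dim micro-Doppler
# feature vector into 8 dims before fusion; 5 motion classes.
D_IN, D_EMB, N_CLASSES = 16, 8, 5

w77, b77 = rand_mat(D_EMB, D_IN), [0.0] * D_EMB      # 77 GHz branch
w24, b24 = rand_mat(D_EMB, D_IN), [0.0] * D_EMB      # 24 GHz branch
w_cls, b_cls = rand_mat(N_CLASSES, 2 * D_EMB), [0.0] * N_CLASSES

def classify(x77, x24):
    """Per-sensor embedding, concatenation fusion, shared softmax head."""
    z77 = relu(linear(x77, w77, b77))
    z24 = relu(linear(x24, w24, b24))
    fused = z77 + z24                   # feature-level (concatenation) fusion
    return softmax(linear(fused, w_cls, b_cls))

probs = classify([random.random() for _ in range(D_IN)],
                 [random.random() for _ in range(D_IN)])
print(len(probs), abs(sum(probs) - 1.0) < 1e-9)
```

In a trained version of such a network, the per-branch weights would be learned jointly with the fusion head, so each branch can specialize to its sensor's phenomenology while the head exploits their complementarity.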
Pages: 6