Midas: Generating mmWave Radar Data from Videos for Training Pervasive and Privacy-preserving Human Sensing Tasks

Cited by: 10
Authors
Deng, Kaikai [1 ]
Zhao, Dong [1 ]
Han, Qiaoyue [1 ]
Zhang, Zihan [1 ]
Wang, Shuyue [1 ]
Zhou, Anfu [1 ]
Ma, Huadong [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, State Key Lab Network & Switching Technol, Beijing 100876, Peoples R China
Source
PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT | 2023, Vol. 7, No. 1
Funding
National Natural Science Foundation of China;
Keywords
human activity recognition; radar sensing; data generation; cross domain translation; RECOGNITION;
DOI
10.1145/3580872
CLC number
TP [Automation technology; computer technology];
Subject classification number
0812;
Abstract
Millimeter-wave radar is a promising sensing modality for pervasive and privacy-preserving human sensing. However, the lack of large-scale radar datasets limits the potential of deep learning models to achieve generalization and robustness. To close this gap, we design a software pipeline that leverages rich video repositories to generate synthetic radar data, which confronts three key challenges: i) multipath reflection and attenuation of radar signals among multiple humans, ii) unconvertible generated data, leading to poor generality across applications, and iii) the class-imbalance issue in videos, leading to low model stability. To this end, we design Midas to generate realistic, convertible radar data from videos via two components: (i) a data generation network (DG-Net) that combines several key modules (depth prediction, human mesh fitting, and a multi-human reflection model) to simulate the multipath reflection and attenuation of radar signals and output convertible coarse radar data, followed by a Transformer model that generates realistic radar data; (ii) a variant Siamese network (VS-Net) that selects key video clips to eliminate data redundancy, addressing the class-imbalance issue. We implement and evaluate Midas with video data from various external data sources and real-world radar data, demonstrating its clear advantages over the state-of-the-art approach for both activity recognition and object detection tasks.
Pages: 26
Related papers (5 records)
  • [1] Midas++: Generating Training Data of mmWave Radars From Videos for Privacy-Preserving Human Sensing With Mobility
    Deng, Kaikai
    Zhao, Dong
    Zhang, Zihan
    Wang, Shuyue
    Zheng, Wenxin
    Ma, Huadong
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (06) : 6650 - 6666
  • [2] Privacy-Preserving Data Collection for Mobile Phone Sensing Tasks
    Liu, Yi-Ning
    Wang, Yan-Ping
    Wang, Xiao-Fen
    Xia, Zhe
    Xu, Jingfang
    INFORMATION SECURITY PRACTICE AND EXPERIENCE (ISPEC 2018), 2018, 11125 : 506 - 518
  • [3] Vid2Doppler: Synthesizing Doppler Radar Data from Videos for Training Privacy-Preserving Activity Recognition
    Ahuja, Karan
    Jiang, Yue
    Goel, Mayank
    Harrison, Chris
    CHI '21: PROCEEDINGS OF THE 2021 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, 2021,
  • [4] Generating a trading strategy in the financial market from sensitive expert data based on the privacy-preserving generative adversarial imitation network
    Chen, Hsin-Yi
    Huang, Szu-Hao
    NEUROCOMPUTING, 2022, 500 : 616 - 631
  • [5] G3R: Generating Rich and Fine-Grained mmWave Radar Data From 2D Videos for Generalized Gesture Recognition
    Deng, Kaikai
    Zhao, Dong
    Zheng, Wenxin
    Ling, Yue
    Yin, Kangwen
    Ma, Huadong
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2025, 24 (04) : 2917 - 2934