AutoDLAR: A Semi-supervised Cross-modal Contact-free Human Activity Recognition System

Times Cited: 0
Authors
Lu, Xinxin [1 ]
Wang, Lei [2 ,3 ]
Lin, Chi [2 ,3 ]
Fan, Xin [4 ]
Han, Bin [1 ]
Han, Xin [1 ]
Qin, Zhenquan [2 ,3 ]
Affiliations
[1] Dalian Univ Technol, Sch Software, 321 Tuqiang St, Dalian 116600, Liaoning, Peoples R China
[2] Dalian Univ Technol, Sch Software, Dalian, Peoples R China
[3] Key Lab Ubiquitous Network & Serv Software Liaoning, Dalian, Peoples R China
[4] Dalian Univ Technol, DUT RU Int Sch Informat Sci & Engn, Dalian, Peoples R China
Keywords
WiFi-based human activity recognition; cross-modal transfer; semi-supervised learning; WIFI;
DOI
10.1145/3607254
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
WiFi-based human activity recognition (HAR) plays an essential role in applications such as security surveillance, health monitoring, and smart homes. Existing HAR methods, though yielding promising performance in indoor scenarios, depend heavily on massive labeled datasets for training, which are extremely difficult to acquire in practical applications. In this paper, we present an automatic data labeling and HAR system, termed AutoDLAR. With a semi-supervised cross-modal learning framework and a hybrid loss function at its core, AutoDLAR transfers rich visual information to automatically label WiFi signals for WiFi-based HAR. Specifically, we devise a lightweight, multi-view WiFi sensing model with a parallel feature embedding method to identify activities accurately and accelerate recognition. We then exploit the video data to fine-tune a well-established visual HAR model, generating effective pseudo-labels that guide the WiFi model's training. We also build a synchronized Video-WiFi dataset covering seven types of human activities under different scenarios to enable training and validation of the semi-supervised HAR system. Extensive experiments on our collected activity dataset and the emotion recognition benchmark demonstrate that AutoDLAR attains an average accuracy of over 95.89% without manual labeling and requires an inference time of only 3.35 ms, outperforming state-of-the-art (SOTA) methods.
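The abstract describes pseudo-label-guided, cross-modal training: a fine-tuned visual HAR model labels synchronized WiFi samples, and a lightweight WiFi model is trained against those labels with a hybrid loss. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' implementation; the network shape, the CSI window size (CSI_CHANNELS, CSI_LEN), the confidence threshold, and the particular mix of hard cross-entropy and soft KL-distillation terms are assumptions made for illustration only.

# Hypothetical sketch (not the released AutoDLAR code): a video teacher model
# pseudo-labels synchronized, unlabeled WiFi CSI windows, and a lightweight
# WiFi student is trained with a hybrid loss combining hard pseudo-label
# cross-entropy and soft-label (distillation) KL divergence.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 7                    # seven activity types, per the abstract
CSI_CHANNELS, CSI_LEN = 90, 256    # assumed CSI window shape (subcarriers x time)

class WiFiStudent(nn.Module):
    """Lightweight 1-D CNN over CSI windows (illustrative stand-in for the
    multi-view WiFi sensing model described in the paper)."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(CSI_CHANNELS, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):          # x: (batch, CSI_CHANNELS, CSI_LEN)
        return self.classifier(self.features(x).squeeze(-1))

def hybrid_loss(student_logits, teacher_logits, tau=2.0, alpha=0.5, conf_thr=0.9):
    """Confidence-filtered hard pseudo-label CE plus soft distillation KL."""
    with torch.no_grad():
        teacher_prob = F.softmax(teacher_logits, dim=1)
        conf, pseudo = teacher_prob.max(dim=1)
        mask = (conf >= conf_thr).float()
    ce = F.cross_entropy(student_logits, pseudo, reduction="none")
    hard = (ce * mask).sum() / mask.sum().clamp(min=1.0)
    soft = F.kl_div(F.log_softmax(student_logits / tau, dim=1),
                    F.softmax(teacher_logits / tau, dim=1),
                    reduction="batchmean") * tau * tau
    return alpha * hard + (1.0 - alpha) * soft

# Toy training step on random tensors standing in for one synchronized batch.
student = WiFiStudent()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
csi_batch = torch.randn(8, CSI_CHANNELS, CSI_LEN)
teacher_logits = torch.randn(8, NUM_CLASSES)   # placeholder for video-model output
loss = hybrid_loss(student(csi_batch), teacher_logits)
optimizer.zero_grad()
loss.backward()
optimizer.step()

In the actual system the teacher logits would come from the fine-tuned video model applied to the frames synchronized with each CSI window; here a random tensor stands in so the snippet runs on its own.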
Pages: 20