Modality Consistency-Guided Contrastive Learning for Wearable-Based Human Activity Recognition

Cited by: 5
Authors
Guo, Changru [1 ]
Zhang, Yingwei [2 ,3 ]
Chen, Yiqiang [3 ]
Xu, Chenyang [4 ]
Wang, Zhong [1 ]
Affiliations
[1] Lanzhou Univ, Sch Comp Sci & Engn, Lanzhou 730000, Peoples R China
[2] Chinese Acad Sci, Inst Comp Technol, Beijing Key Lab Mobile Comp & Pervas Device, Beijing 100190, Peoples R China
[3] Univ Chinese Acad Sci, Beijing 100190, Peoples R China
[4] Tianjin Univ, Sch Comp Sci, Tianjin 300072, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, No. 12
Keywords
Human activity recognition; Self-supervised learning; Task analysis; Data models; Time series analysis; Internet of Things; Face recognition; Contrastive learning (CL); human activity recognition (HAR); intermodality; intramodality; self-supervised; AUTHENTICATION PROTOCOL; RESOURCE-ALLOCATION; TRUST MODEL; SCHEME; COMMUNICATION; EFFICIENT; NETWORK; ACCESS; MANAGEMENT; SECURE;
DOI
10.1109/JIOT.2024.3379019
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
In wearable sensor-based human activity recognition (HAR) research, several factors limit the development of generalized models, such as the time and resources required to acquire abundant annotated data and the inter-dataset inconsistency of activity categories. In this article, we take advantage of the complementarity and redundancy between different wearable modalities (e.g., accelerometers, gyroscopes, and magnetometers) and propose a modality consistency-guided contrastive learning (ModCL) method, which can construct a generalized model using annotation-free self-supervised learning and realize personalized domain adaptation with a small amount of annotated data. Specifically, ModCL exploits both intramodality and intermodality consistency of the wearable device data to construct contrastive learning tasks, encouraging the recognition model to recognize similar patterns and distinguish dissimilar ones. By leveraging these mixed constraint strategies, ModCL can learn the inherent activity patterns and extract meaningful generalized features across different data sets. To verify the effectiveness of the ModCL method, we conduct experiments on five benchmark data sets (i.e., OPPORTUNITY and PAMAP2 as pretraining data sets, and UniMiB-SHAR, UCI-HAR, and WISDM as independent validation data sets). Experimental results show that ModCL achieves significant improvements in recognition accuracy compared with other state-of-the-art methods.
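The abstract describes two sources of contrastive supervision: intramodality consistency (between views of the same modality) and intermodality consistency (between modalities of the same time window). Below is a minimal, hypothetical PyTorch sketch of such a combined objective, assuming separate per-modality encoders (enc_acc, enc_gyro), a stochastic augmentation function, and a standard InfoNCE loss; all names and the weighting scheme are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a ModCL-style objective (not the authors' code).
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.1):
    """InfoNCE loss: matching rows of z_a and z_b are positives,
    all other rows in the batch serve as negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature              # (B, B) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)

def modcl_style_loss(enc_acc, enc_gyro, augment, x_acc, x_gyro, lam=0.5):
    """Combine an intramodality term (two augmented views of the same
    accelerometer window) with an intermodality term (accelerometer vs.
    gyroscope embeddings of the same window). lam is an assumed weight."""
    # Intramodality consistency: two stochastic augmentations of one modality.
    z1 = enc_acc(augment(x_acc))
    z2 = enc_acc(augment(x_acc))
    intra = info_nce(z1, z2)
    # Intermodality consistency: different modalities of the same time window.
    z_acc = enc_acc(x_acc)
    z_gyro = enc_gyro(x_gyro)
    inter = info_nce(z_acc, z_gyro)
    return lam * intra + (1.0 - lam) * inter
```

In this sketch, pretraining would minimize the combined loss on unlabeled windows (e.g., from OPPORTUNITY and PAMAP2), after which the encoders could be fine-tuned with a small amount of labeled data from a target data set.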
Pages: 21750 - 21762
Number of pages: 13
Related Papers
50 records in total
  • [31] Self-Supervised Contrastive Learning for Radar-Based Human Activity Recognition
    Rahman, Mohammad Mahbubur
    Gurbuz, Sevgi Zubeyde
    2023 IEEE RADAR CONFERENCE, RADARCONF23, 2023,
  • [32] Contrastive Self-supervised Learning for Sensor-based Human Activity Recognition
    Khaertdinov, Bulat
    Ghaleb, Esam
    Asteriadis, Stylianos
    2021 INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS (IJCB 2021), 2021,
  • [33] Are we ready for wearable-based global physical activity surveillance?
    Gupta, Nidhi
    Crowley, Patrick
    Holtermann, Andreas
    Straker, Leon
    Stamatakis, Emmanuel
    Ding, Ding
    BRITISH JOURNAL OF SPORTS MEDICINE, 2024, 58 (07) : 356 - 358
  • [34] Wearable-Based Stair Climb Power Estimation and Activity Classification
    Psaltos, Dimitrios J.
    Mamashli, Fahimeh
    Adamusiak, Tomasz
    Demanuele, Charmaine
    Santamaria, Mar
    Czech, Matthew D.
    SENSORS, 2022, 22 (17)
  • [35] Consistency-Guided Meta-learning for Bootstrapping Semi-supervised Medical Image Segmentation
    Wei, Qingyue
    Yu, Lequan
    Li, Xianhang
    Shao, Wei
    Xie, Cihang
    Xing, Lei
    Zhou, Yuyin
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2023, PT IV, 2023, 14223 : 183 - 193
  • [36] Deep learning based multimodal complex human activity recognition using wearable devices
    Chen, Ling
    Liu, Xiaoze
    Peng, Liangying
    Wu, Menghan
    APPLIED INTELLIGENCE, 2021, 51 (06) : 4029 - 4042
  • [37] Wearable Sensor-Based Human Activity Recognition with Hybrid Deep Learning Model
    Luwe, Yee Jia
    Lee, Chin Poo
    Lim, Kian Ming
    INFORMATICS-BASEL, 2022, 9 (03):
  • [38] Sensor Data Augmentation by Resampling in Contrastive Learning for Human Activity Recognition
    Wang, Jinqiang
    Zhu, Tao
    Gan, Jingyuan
    Chen, Liming Luke
    Ning, Huansheng
    Wan, Yaping
    IEEE SENSORS JOURNAL, 2022, 22 (23) : 22994 - 23008
  • [39] HL-HAR: Hierarchical Learning Based Human Activity Recognition in Wearable Computing
    Liu, Yan
    Zhao, Wentao
    Liu, Qiang
    Yu, Linyuan
    Wang, Dongxu
    CLOUD COMPUTING AND SECURITY, PT II, 2017, 10603 : 684 - 693
  • [40] Deep learning based multimodal complex human activity recognition using wearable devices
    Ling Chen
    Xiaoze Liu
    Liangying Peng
    Menghan Wu
    Applied Intelligence, 2021, 51 : 4029 - 4042