Modality Consistency-Guided Contrastive Learning for Wearable-Based Human Activity Recognition

Cited by: 5
|
Authors
Guo, Changru [1 ]
Zhang, Yingwei [2 ,3 ]
Chen, Yiqiang [3 ]
Xu, Chenyang [4 ]
Wang, Zhong [1 ]
Affiliations
[1] Lanzhou Univ, Sch Comp Sci & Engn, Lanzhou 730000, Peoples R China
[2] Chinese Acad Sci, Inst Comp Technol, Beijing Key Lab Mobile Comp & Pervas Device, Beijing 100190, Peoples R China
[3] Univ Chinese Acad Sci, Beijing 100190, Peoples R China
[4] Tianjin Univ, Sch Comp Sci, Tianjin 300072, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, No. 12
Keywords
Human activity recognition; Self-supervised learning; Task analysis; Data models; Time series analysis; Internet of Things; Face recognition; Contrastive learning (CL); human activity recognition (HAR); intermodality; intramodality; self-supervised; AUTHENTICATION PROTOCOL; RESOURCE-ALLOCATION; TRUST MODEL; SCHEME; COMMUNICATION; EFFICIENT; NETWORK; ACCESS; MANAGEMENT; SECURE;
DOI
10.1109/JIOT.2024.3379019
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
In wearable sensor-based human activity recognition (HAR) research, several factors limit the development of generalized models, such as the time and resources required to acquire abundant annotated data, and the inconsistency of activity categories across data sets. In this article, we take advantage of the complementarity and redundancy between different wearable modalities (e.g., accelerometers, gyroscopes, and magnetometers) and propose a modality consistency-guided contrastive learning (ModCL) method, which can construct a generalized model using annotation-free self-supervised learning and realize personalized domain adaptation with a small amount of annotated data. Specifically, ModCL exploits both intramodality and intermodality consistency of the wearable device data to construct contrastive learning tasks, encouraging the recognition model to recognize similar patterns and distinguish dissimilar ones. By leveraging these mixed constraint strategies, ModCL can learn the inherent activity patterns and extract meaningful generalized features across different data sets. To verify the effectiveness of the ModCL method, we conduct experiments on five benchmark data sets (OPPORTUNITY and PAMAP2 as pretraining data sets; UniMiB-SHAR, UCI-HAR, and WISDM as independent validation data sets). Experimental results show that ModCL achieves significant improvements in recognition accuracy compared with other state-of-the-art methods.
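The abstract describes intramodality and intermodality consistency as paired contrastive objectives but gives no formulas; consult the paper (DOI above) for the actual loss. As a rough, hypothetical sketch only, both terms can be written as InfoNCE-style contrastive losses over batches of window embeddings. The function name, the synthetic noise standing in for augmentation/encoding, and the equal weighting of the two terms are all illustrative assumptions, not details from the paper:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: row i of `positives` is the positive for row i of
    `anchors`; every other row in the batch serves as a negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives sit on the diagonal

rng = np.random.default_rng(0)
acc = rng.standard_normal((8, 16))                    # accelerometer embeddings, batch of 8
gyro = acc + 0.05 * rng.standard_normal((8, 16))      # gyroscope view of the same windows
acc_aug = acc + 0.05 * rng.standard_normal((8, 16))   # augmented accelerometer view

# Intramodality term: two augmented views of one modality should agree.
intra = info_nce(acc, acc_aug)
# Intermodality term: the same time window seen by two sensors should agree.
inter = info_nce(acc, gyro)
loss = intra + inter   # mixed constraint; equal weighting is an assumption
```

In the intramodality term the positive pair is two augmentations of one sensor stream; in the intermodality term it is the same window captured by two different sensors, which mirrors the abstract's idea of pulling similar patterns together and pushing dissimilar ones apart.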
Pages: 21750-21762
Page count: 13
Related Papers
50 in total
  • [1] Learning Disentangled Behaviour Patterns for Wearable-based Human Activity Recognition
    Su, Jie
    Wen, Zhenyu
    Lin, Tao
    Guan, Yu
    PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT, 2022, 6 (01):
  • [2] The Impact of Data Reduction on Wearable-Based Human Activity Recognition
    Nourani, Hosein
    Shihab, Emad
    Sarbishei, Omid
    2019 IEEE INTERNATIONAL CONFERENCE ON PERVASIVE COMPUTING AND COMMUNICATIONS WORKSHOPS (PERCOM WORKSHOPS), 2019, : 89 - 94
  • [3] Modality aware contrastive learning for multimodal human activity recognition
    Dixon, Sam
    Yao, Lina
    Davidson, Robert
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2024, 36 (16):
  • [4] Timestamp-Supervised Wearable-Based Activity Segmentation and Recognition With Contrastive Learning and Order-Preserving Optimal Transport
    Xia, Songpengcheng
    Chu, Lei
    Pei, Ling
    Yang, Jiarui
    Yu, Wenxian
    Qiu, Robert C.
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (12) : 10734 - 10751
  • [5] HIERARCHICAL DEEP LEARNING MODEL WITH INERTIAL AND PHYSIOLOGICAL SENSORS FUSION FOR WEARABLE-BASED HUMAN ACTIVITY RECOGNITION
    Hwang, Dae Yon
    Ng, Pai Chet
    Yu, Yuanhao
    Wang, Yang
    Spachos, Petros
    Hatzinakos, Dimitrios
    Plataniotis, Konstantinos N.
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 21 - 25
  • [6] Wearable-based behaviour interpolation for semi-supervised human activity recognition
    Duan, Haoran
    Wang, Shidong
    Ojha, Varun
    Wang, Shizheng
    Huang, Yawen
    Long, Yang
    Ranjan, Rajiv
    Zheng, Yefeng
    INFORMATION SCIENCES, 2024, 665
  • [7] OKRELM: online kernelized and regularized extreme learning machine for wearable-based activity recognition
    Hu, Lisha
    Chen, Yiqiang
    Wang, Jindong
    Hu, Chunyu
    Jiang, Xinlong
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2018, 9 (09) : 1577 - 1590
  • [8] Improving Wearable-Based Activity Recognition Using Image Representations
    Guinea, Alejandro Sanchez
    Sarabchian, Mehran
    Muehlhaeuser, Max
    SENSORS, 2022, 22 (05)
  • [9] What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks?
    Qian, Hangwei
    Tian, Tian
    Miao, Chunyan
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 3761 - 3771