CSI-Based Location-Independent Human Activity Recognition by Contrast Between Dual Stream Fusion Features

Cited: 0
Authors
Wang, Yujie [1 ]
Yu, Guangwei [2 ]
Zhang, Yong [2 ]
Liu, Dun [2 ]
Zhang, Yang [3 ]
Affiliations
[1] Univ Sci & Technol Beijing, Sch Comp & Commun Engn, Beijing 100083, Peoples R China
[2] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei 230001, Peoples R China
[3] Univ Manchester, Sch Comp Sci, Manchester M13 9PL, England
Funding
National Natural Science Foundation of China;
Keywords
Contrastive learning; channel state information (CSI); feature fusion; recognition;
DOI
10.1109/JSEN.2024.3504005
CLC Classification
TM [Electrical technology]; TN [Electronic technology, communication technology];
Discipline Codes
0808; 0809;
Abstract
Because channel state information (CSI) data contain both activity and environmental information, the features of the same activity vary significantly across locations. Existing CSI-based human activity recognition (HAR) systems achieve high recognition accuracy at training locations, using mechanisms such as transfer learning and few-shot learning to learn new activities, but they struggle to maintain accurate recognition at other locations. In this article, we propose a contrastive fusion feature-based location-independent HAR (CFLH) system to address this issue. Unlike existing methods that train the feature extractor and fully connected classifier jointly, the CFLH system decouples the training of the feature extractor from that of the classifier: the feature extractor is optimized solely by a contrastive loss computed at the feature level. To construct positive samples, the CFLH system randomly scales activity signals along the temporal dimension, enriching intra- and interclass features across locations; using the labels, samples from other activity categories are treated as negative samples to widen interclass feature differences. For more effective activity feature extraction, the CFLH system employs a two-tower transformer to extract temporal-stream and channel-stream features, which an attention- and residual-based fusion module (AR-Fusion) then merges into a dual-stream fusion feature. Experimental results show that, when the feature extractor is trained with samples of three activities from 12 points and the classifier is trained with samples of three new activities added at the training points, recognition accuracy at the testing location reaches up to 94.48% for the three new activities and 95.71% for the three original activities.
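As a rough illustration of the two ideas described in the abstract, the sketch below constructs a positive view of a CSI window by random temporal scaling and scores anchor/positive/negative features with an InfoNCE-style contrastive loss. This is a minimal sketch under stated assumptions, not the paper's implementation; all function names, the scaling range, and the temperature are hypothetical.

```python
import numpy as np

def random_temporal_scale(signal, low=0.8, high=1.2, rng=None):
    """Positive-sample construction (hypothetical): stretch/compress a CSI
    window (time x subcarriers) along time via linear interpolation, then
    resample back to the original length so batch shapes stay fixed."""
    if rng is None:
        rng = np.random.default_rng()
    t, c = signal.shape
    factor = rng.uniform(low, high)          # random temporal scale factor
    scaled_len = max(2, int(round(t * factor)))
    src = np.linspace(0.0, t - 1, scaled_len)
    scaled = np.stack(
        [np.interp(src, np.arange(t), signal[:, k]) for k in range(c)], axis=1
    )
    dst = np.linspace(0.0, scaled_len - 1, t)
    return np.stack(
        [np.interp(dst, np.arange(scaled_len), scaled[:, k]) for k in range(c)],
        axis=1,
    )

def info_nce(anchor, positive, negatives, tau=0.1):
    """Feature-level contrastive loss: pull the scaled positive view toward
    the anchor, push features of other activity classes away."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    logits = np.array(
        [cos(anchor, positive)] + [cos(anchor, n) for n in negatives]
    ) / tau
    logits -= logits.max()                   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0] + 1e-12)         # -log p(positive | anchor)
```

In this reading, only this loss drives the feature extractor; the classifier is trained separately afterward, which is the decoupling the abstract describes.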
Pages: 4897-4907
Page count: 11