XFall: Domain Adaptive Wi-Fi-Based Fall Detection With Cross-Modal Supervision

Cited by: 3
Authors
Chi, Guoxuan [1 ,2 ]
Zhang, Guidong [1 ,2 ]
Ding, Xuan [1 ,2 ]
Ma, Qiang [1 ,2 ]
Yang, Zheng [1 ,2 ]
Du, Zhenguo [3 ]
Xiao, Houfei [3 ]
Liu, Zhuang [3 ]
Affiliations
[1] Tsinghua Univ, Sch Software, Beijing 100084, Peoples R China
[2] Tsinghua Univ, BNRist, Beijing 100084, Peoples R China
[3] Huawei Technol Co Ltd, Shenzhen 518129, Peoples R China
Keywords
Fall detection; Feature extraction; Wireless fidelity; Sensors; Training; Wireless sensor networks; Wireless communication; Domain adaptation; statistical electric field; transformer encoder; cross-modal supervision; DETECTION SYSTEM
DOI
10.1109/JSAC.2024.3413997
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
Recent years have witnessed an increasing demand for human fall detection systems. Among existing methods, Wi-Fi-based fall detection has become one of the most promising solutions due to its pervasiveness. However, when applied to a new domain, existing Wi-Fi-based solutions suffer severe performance degradation caused by low generalizability. In this paper, we propose XFall, a domain-adaptive fall detection system based on Wi-Fi. XFall overcomes the generalization problem from three aspects. To advance cross-environment sensing, XFall exploits an environment-independent feature called the speed distribution profile, which is independent of indoor layout and device deployment. To ensure sensitivity across all fall types, an attention-based encoder is designed to extract a general fall representation by associating both the spatial and temporal dimensions of the input. To train a large model with a limited amount of Wi-Fi data, we design a cross-modal learning framework that adopts a pre-trained visual model for supervision during training. We implement and evaluate XFall on one of the latest commercial wireless products through a year-long deployment in real-world settings. The results show that XFall achieves an overall accuracy of 96.8%, with a missed-alarm rate of 3.1% and a false-alarm rate of 3.3%, outperforming state-of-the-art solutions in both in-domain and cross-domain evaluations.
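The cross-modal learning framework described in the abstract lends itself to a compact illustration. Below is a minimal PyTorch sketch, not the authors' implementation: it assumes a transformer encoder over speed-distribution-profile frames as the Wi-Fi branch, a frozen stand-in for the pre-trained visual model, an MSE embedding-alignment loss, and toy tensor shapes throughout; all module names and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class WiFiFallEncoder(nn.Module):
    """Attention-based encoder: attends across time steps of a speed-distribution profile."""
    def __init__(self, n_speed_bins=64, d_model=128, n_heads=4, n_layers=4):
        super().__init__()
        self.proj = nn.Linear(n_speed_bins, d_model)          # per-frame speed bins -> token
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2)                     # fall / non-fall logits

    def forward(self, x):                                     # x: (B, T, n_speed_bins)
        h = self.encoder(self.proj(x))                        # (B, T, d_model)
        emb = h.mean(dim=1)                                   # temporal pooling -> (B, d_model)
        return emb, self.head(emb)

# Frozen stand-in for the pre-trained visual model (hypothetical; a real system
# would load an off-the-shelf video backbone here).
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 16 * 32 * 32, 128))
for p in teacher.parameters():
    p.requires_grad = False

student = WiFiFallEncoder()
opt = torch.optim.Adam(student.parameters(), lr=1e-4)
cls_loss, align_loss = nn.CrossEntropyLoss(), nn.MSELoss()

# One training step on a toy batch: Wi-Fi speed profiles, synchronized video, labels.
wifi = torch.randn(8, 100, 64)           # (batch, time frames, speed bins)
video = torch.randn(8, 3, 16, 32, 32)    # (batch, channels, frames, height, width)
labels = torch.randint(0, 2, (8,))

emb, logits = student(wifi)
with torch.no_grad():
    target = teacher(video)              # visual embedding = supervision signal
loss = cls_loss(logits, labels) + 0.5 * align_loss(emb, target)
opt.zero_grad()
loss.backward()
opt.step()
```

The design point the sketch captures is that gradients flow only through the Wi-Fi encoder; the visual model acts as a fixed supervision signal, so, as the abstract indicates, the visual modality is needed only during training.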
Pages: 2457-2471
Number of pages: 15
Related Papers (50 total)
  • [41] CMDA: Cross-Modal and Domain Adversarial Adaptation for LiDAR-Based 3D Object Detection
    Chang, Gyusam
    Roh, Wonseok
    Jang, Sujin
    Lee, Dongwook
    Ji, Daehyun
    Oh, Gyeongrok
    Park, Jinsun
    Kim, Jinkyu
    Kim, Sangpil
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 2, 2024, : 972 - 980
  • [42] Cross-modal feature extraction and integration based RGBD saliency detection
    Pan, Liang
    Zhou, Xiaofei
    Shi, Ran
    Zhang, Jiyong
    Yan, Chenggang
    IMAGE AND VISION COMPUTING, 2020, 101
  • [43] PAN: Prototype-based Adaptive Network for Robust Cross-modal Retrieval
    Zeng, Zhixiong
    Wang, Shuai
    Xu, Nan
    Mao, Wenji
    SIGIR '21 - PROCEEDINGS OF THE 44TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, 2021, : 1125 - 1134
  • [44] ARIF: An Adaptive Attention-Based Cross-Modal Representation Integration Framework
    Liu, Chengzhi
    Luo, Zihong
    Bi, Yifei
    Huang, Zile
    Shu, Dong
    Hou, Jiheng
    Wang, Hongchen
    Liang, Kaiyu
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT VI, 2024, 15021 : 3 - 18
  • [45] AFMCT: adaptive fusion module based on cross-modal transformer block for 3D object detection
    Zhang, Bingli
    Wang, Yixin
    Zhang, Chengbiao
    Jiang, Junzhao
    Pan, Zehao
    Cheng, Jin
    Zhang, Yangyang
    Wang, Xinyu
    Yang, Chenglei
    Wang, Yanhui
    MACHINE VISION AND APPLICATIONS, 2024, 35 (03)
  • [46] Adaptive Label Correlation Based Asymmetric Discrete Hashing for Cross-Modal Retrieval
    Li, Huaxiong
    Zhang, Chao
    Jia, Xiuyi
    Gao, Yang
    Chen, Chunlin
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (02) : 1185 - 1199
  • [48] GCANet: A Cross-Modal Pedestrian Detection Method Based on Gaussian Cross Attention Network
    Peng, Peiran
    Mu, Feng
    Yan, Peilin
    Song, Liqiang
    Li, Hui
    Chen, Yu
    Li, Jianan
    Xu, Tingfa
    INTELLIGENT COMPUTING, VOL 2, 2022, 507 : 520 - 530
  • [49] Multi-Modal Sarcasm Detection Based on Cross-Modal Composition of Inscribed Entity Relations
    Li, Lingshan
    Jin, Di
    Wang, Xiaobao
    Guo, Fengyu
    Wang, Longbiao
    Dang, Jianwu
    2023 IEEE 35TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2023, : 918 - 925
  • [50] SymforNet: application of cross-modal information correspondences based on self-supervision in symbolic music generation
    Abudukelimu, Halidanmu
    Chen, Jishang
    Liang, Yunze
    Abulizi, Abudukelimu
    Yasen, Alimujiang
    APPLIED INTELLIGENCE, 2024, 54 (05) : 4140 - 4152