ContrasGAN: Unsupervised domain adaptation in Human Activity Recognition via adversarial and contrastive learning

Cited by: 15
Authors
Sanabria, Andrea Rosales [1 ]
Zambonelli, Franco [2 ]
Dobson, Simon [1 ]
Ye, Juan [1 ]
Affiliations
[1] Univ St Andrews, Sch Comp Sci, St Andrews, Fife, Scotland
[2] Univ Modena & Reggio Emilia, Dipartimento Sci & Metodi Ingn, Modena Mo, Italy
Keywords
Human activity recognition; Unsupervised domain adaptation; GAN; Contrastive loss; Kernel
DOI
10.1016/j.pmcj.2021.101477
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline classification code
0812
Abstract
Human Activity Recognition (HAR) makes it possible to drive applications directly from embedded and wearable sensors. Machine learning, and especially deep learning, has made significant progress in learning sensor features from raw sensing signals with high recognition accuracy. However, most techniques need to be trained on a large labelled dataset, which is often difficult to acquire. In this paper, we present ContrasGAN, an unsupervised domain adaptation technique that addresses this labelling challenge by transferring an activity model from one labelled domain to other unlabelled domains. ContrasGAN uses bi-directional generative adversarial networks for heterogeneous feature transfer and contrastive learning to capture distinctive features between classes. We evaluate ContrasGAN on three commonly-used HAR datasets under conditions of cross-body, cross-user, and cross-sensor transfer learning. Experimental results show that ContrasGAN outperforms a number of state-of-the-art techniques on all these tasks, at relatively low computational cost. (C) 2021 Elsevier B.V. All rights reserved.
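The contrastive component the abstract refers to ("capture distinctive features between classes") can be illustrated with a minimal sketch of a classic pairwise contrastive loss in the style of Hadsell et al. This is a hypothetical illustration of the general technique, not the authors' implementation; the function name, margin value, and embeddings are assumptions.

```python
import numpy as np

def contrastive_loss(z_i, z_j, same_class, margin=1.0):
    """Pairwise contrastive loss: pulls same-class embeddings together
    and pushes different-class embeddings at least `margin` apart."""
    d = np.linalg.norm(z_i - z_j)       # Euclidean distance between embeddings
    if same_class:
        return d ** 2                   # penalise any separation within a class
    return max(0.0, margin - d) ** 2    # penalise closeness across classes

# Identical embeddings of the same activity incur no loss.
a = np.array([0.3, 0.7])
print(contrastive_loss(a, a, same_class=True))   # → 0.0
# Coincident embeddings of different activities pay the full margin penalty.
print(contrastive_loss(a, a, same_class=False))  # → 1.0
```

Minimising this loss over pairs of sensor embeddings encourages activity classes to form well-separated clusters, which is the property the contrastive term adds on top of the adversarial feature transfer.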
Pages: 18