Opportunistic Activity Recognition in IoT Sensor Ecosystems via Multimodal Transfer Learning

Cited by: 0
Authors
Oresti Banos
Alberto Calatroni
Miguel Damas
Hector Pomares
Daniel Roggen
Ignacio Rojas
Claudia Villalonga
Affiliations
[1] Research Center for Information and Communication Technologies of the University of Granada (CITIC-UGR), Department of Computer Architecture and Computer Technology
[2] Bonsai Systems GmbH
[3] University of Sussex, Sensor Technology Research Centre
[4] Universidad Internacional de la Rioja, School of Engineering and Technology
Source
Neural Processing Letters | 2021 / Vol. 53
Keywords
Transfer learning; Multimodal sensors; Wearable sensors; Ambient sensors; Activity recognition; Human–computer Interaction;
DOI
Not available
Abstract
Recognizing human activities seamlessly and ubiquitously is now closer than ever given the myriad of sensors readily deployed on and around users. However, training recognition systems continues to be both time- and resource-consuming, as datasets must be collected ad hoc for each specific sensor setup a person may encounter in daily life. This work presents an alternative approach based on transfer learning to opportunistically train new, unseen (target) sensor systems from existing (source) sensor systems. The approach uses system identification techniques to learn a mapping function that automatically translates the signals from the source sensor domain to the target sensor domain, and vice versa. This can be done for sensor signals of the same or of different modalities. Two transfer models are proposed to translate recognition systems based on either activity templates or activity models, depending on the characteristics of both the source and target sensor systems. The proposed transfer methods are evaluated in a human–computer interaction scenario, where the transfer is performed between wearable sensors placed at different body locations, and between wearable sensors and an ambient depth-camera sensor. Results show that a good transfer is possible with just a few seconds of data, irrespective of the direction of the transfer and for both same-modality and cross-modality sensors.
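
To make the abstract's core idea concrete, the sketch below shows how a signal-level mapping between a source and a target sensor could be identified from a few seconds of time-aligned data and then used to translate source signals into the target domain. It is a minimal illustration assuming a simple linear least-squares model; the function names, array shapes, and numbers are illustrative assumptions, not taken from the paper.

# Minimal sketch (not the authors' exact method): signal-level transfer
# treated as a linear system-identification problem, fitted on a short
# window of simultaneously recorded source and target sensor data.
# All names, shapes, and constants below are illustrative assumptions.
import numpy as np

def fit_mapping(source, target):
    """Least-squares mapping (W, b) so that source @ W + b approximates target.

    source: (n_samples, n_source_channels) signals from the known sensor.
    target: (n_samples, n_target_channels) time-aligned signals from the
            new sensor, captured during a brief co-occurrence window.
    """
    X = np.hstack([source, np.ones((source.shape[0], 1))])  # append bias column
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)        # minimize ||X @ coef - target||
    return coef[:-1], coef[-1]                                # weights, bias

def translate(source, W, b):
    """Project source-domain signals into the target sensor domain."""
    return source @ W + b

# Toy usage: a few seconds of 3-axis accelerometer-like data at 50 Hz mapped
# onto a 2-channel target sensor (purely synthetic numbers for illustration).
rng = np.random.default_rng(0)
src = rng.standard_normal((250, 3))
true_W = rng.standard_normal((3, 2))
tgt = src @ true_W + 0.05 * rng.standard_normal((250, 2))

W, b = fit_mapping(src, tgt)
print(np.allclose(translate(src, W, b), tgt, atol=0.2))  # True: mapping recovered

In the paper's setting, such a mapping would be estimated during a brief period in which both sensor systems observe the same activity; the translated signals could then let the target system reuse the source system's activity templates or activity models.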
Pages: 3169 - 3197
Page count: 28
Related Papers
50 records total
  • [41] CNN-based Sensor Fusion Techniques for Multimodal Human Activity Recognition
    Muenzner, Sebastian
    Schmidt, Philip
    Reiss, Attila
    Hanselmann, Michael
    Stiefelhagen, Rainer
    Duerichen, Robert
    PROCEEDINGS OF THE 2017 ACM INTERNATIONAL SYMPOSIUM ON WEARABLE COMPUTERS (ISWC 17), 2017, : 158 - 165
  • [42] Multimodal Sensor Data Fusion and Ensemble Modeling for Human Locomotion Activity Recognition
    Oh, Se Won
    Jeong, Hyuntae
    Chung, Seungeun
    Lim, Jeong Mook
    Noh, Kyoung Ju
    ADJUNCT PROCEEDINGS OF THE 2023 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING & THE 2023 ACM INTERNATIONAL SYMPOSIUM ON WEARABLE COMPUTING, UBICOMP/ISWC 2023 ADJUNCT, 2023, : 546 - 550
  • [43] Cross-dataset activity recognition via adaptive spatial-temporal transfer learning
    Qin X.
    Chen Y.
    Wang J.
    Yu C.
    Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2019, 3 (04)
  • [44] Multimodal Deep Learning for Group Activity Recognition in Smart Office Environments
    Florea, George Albert
    Mihailescu, Radu-Casian
    FUTURE INTERNET, 2020, 12 (08):
  • [45] Deep Learning for Heterogeneous Human Activity Recognition in Complex IoT Applications
    Abdel-Basset, Mohamed
    Hawash, Hossam
    Chang, Victor
    Chakrabortty, Ripon K.
    Ryan, Michael
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (08) : 5653 - 5665
  • [46] Multimodal Multi-stream Deep Learning for Egocentric Activity Recognition
    Song, Sibo
    Chandrasekhar, Vijay
    Mandal, Bappaditya
    Li, Liyuan
    Lim, Joo-Hwee
    Babu, Giduthuri Sateesh
    San, Phyo Phyo
    Cheung, Ngai-Man
    PROCEEDINGS OF 29TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, (CVPRW 2016), 2016, : 378 - 385
  • [47] A Deep Learning and Multimodal Ambient Sensing Framework for Human Activity Recognition
    Yachir, Ali
    Amamra, Abdenour
    Djamaa, Badis
    Zerrouki, Ali
    Amour, Ahmed KhierEddine
    PROCEEDINGS OF THE 2019 FEDERATED CONFERENCE ON COMPUTER SCIENCE AND INFORMATION SYSTEMS (FEDCSIS), 2019, : 101 - 105
  • [48] Multimodal Learning for Human Action Recognition Via Bimodal/Multimodal Hybrid Centroid Canonical Correlation Analysis
    Elmadany, Nour El Din
    He, Yifeng
    Guan, Ling
    IEEE TRANSACTIONS ON MULTIMEDIA, 2019, 21 (05) : 1317 - 1331
  • [49] Weakly-supervised sensor-based activity segmentation and recognition via learning from distributions
    Qian, Hangwei
    Pan, Sinno Jialin
    Miao, Chunyan
    ARTIFICIAL INTELLIGENCE, 2021, 292
  • [50] Transfer Learning Based Method for Human Activity Recognition
    Zebhi, Saeedeh
    AlModarresi, S. M. T.
    Abootalebi, Vahid
    2021 29TH IRANIAN CONFERENCE ON ELECTRICAL ENGINEERING (ICEE), 2021, : 761 - 765