HarMI: Human Activity Recognition Via Multi-Modality Incremental Learning

Citations: 0
Authors
Zhang, Xiao [1 ,2 ]
Yu, Hongzheng [1 ]
Yang, Yang [4 ]
Gu, Jingjing [5 ]
Li, Yujun [3 ]
Zhuang, Fuzhen [6 ,7 ]
Yu, Dongxiao [1 ]
Ren, Zhaochun [1 ]
Affiliations
[1] Shandong Univ, Sch Comp Sci & Technol, Qingdao 266237, Peoples R China
[2] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210093, Peoples R China
[3] Shandong Univ, Sch Informat Sci & Engn, Qingdao 266237, Peoples R China
[4] Nanjing Univ Sci & Technol, Nanjing 210014, Peoples R China
[5] Nanjing Univ Aeronaut & Astronaut, Nanjing 210016, Peoples R China
[6] Beihang Univ, Inst Artificial Intelligence, Beijing 100191, Peoples R China
[7] Chinese Acad Sci, Xiamen Data Intelligence Acad ICT, Xiamen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sensors; Training; Data models; Activity recognition; Correlation; Intelligent sensors; Training data; Catastrophic forgetting; incremental learning; human activity recognition; mobile device; multi-modality;
DOI
10.1109/JBHI.2021.3085602
CLC classification
TP [Automation Technology, Computer Technology];
Discipline classification code
0812;
Abstract
Nowadays, with the development of various kinds of sensors in smartphones and wearable devices, human activity recognition (HAR) has been widely researched and has numerous applications in healthcare, smart cities, etc. Many techniques based on hand-crafted feature engineering or deep neural networks have been proposed for sensor-based HAR. However, these existing methods usually recognize activities offline, meaning all data must be collected before training, which occupies large amounts of storage space. Moreover, once offline model training is finished, the trained model cannot recognize new activities without retraining from scratch, at a high cost in time and space. In this paper, we propose a multi-modality incremental learning model, called HarMI, with continuous learning ability. The proposed HarMI model can start training quickly with little storage space and can learn new activities without storing previous training data. In detail, we first adopt an attention mechanism to align heterogeneous sensor data with different sampling frequencies. In addition, to overcome catastrophic forgetting in incremental learning, HarMI utilizes elastic weight consolidation and canonical correlation analysis from a multi-modality perspective. Extensive experiments on two public datasets demonstrate that HarMI achieves superior performance compared with several state-of-the-art methods.
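For readers unfamiliar with elastic weight consolidation (EWC), the anti-forgetting technique named in the abstract, the core idea is a quadratic penalty that anchors parameters important to previously learned activities while the model trains on new ones. The sketch below is a minimal, generic PyTorch illustration under the assumption of a diagonal Fisher information estimate; the function name `ewc_penalty`, the dictionaries `fisher` and `old_params`, and the coefficient `lam` are illustrative assumptions, not the HarMI authors' implementation.

```python
# Minimal sketch of an elastic weight consolidation (EWC) penalty in PyTorch.
# Generic illustration, not the HarMI implementation: `fisher` and `old_params`
# are assumed to map parameter names to a diagonal Fisher estimate and to the
# parameter values captured after training on previously seen activities.
import torch


def ewc_penalty(model, fisher, old_params, lam=1000.0):
    """Return (lam / 2) * sum_i F_i * (theta_i - theta_i*)^2 over anchored params."""
    penalty = torch.zeros((), dtype=torch.float32)
    for name, param in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty


# Usage during incremental training on a new activity class:
#   loss = task_loss + ewc_penalty(model, fisher, old_params)
#   loss.backward()
```

The penalty is added to the ordinary task loss, so parameters with high estimated Fisher information (important for old activities) are pulled back toward their previous values, while unimportant parameters remain free to adapt to the new activity.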
Pages: 939-951
Number of pages: 13