HarMI: Human Activity Recognition Via Multi-Modality Incremental Learning

Cited by: 0
Authors
Zhang, Xiao [1 ,2 ]
Yu, Hongzheng [1 ]
Yang, Yang [4 ]
Gu, Jingjing [5 ]
Li, Yujun [3 ]
Zhuang, Fuzhen [6 ,7 ]
Yu, Dongxiao [1 ]
Ren, Zhaochun [1 ]
Affiliations
[1] Shandong Univ, Sch Comp Sci & Technol, Qingdao 266237, Peoples R China
[2] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210093, Peoples R China
[3] Shandong Univ, Sch Informat Sci & Engn, Qingdao 266237, Peoples R China
[4] Nanjing Univ Sci & Technol, Nanjing 210014, Peoples R China
[5] Nanjing Univ Aeronaut & Astronaut, Nanjing 210016, Peoples R China
[6] Beihang Univ, Inst Artificial Intelligence, Beijing 100191, Peoples R China
[7] Chinese Acad Sci, Xiamen Data Intelligence Acad ICT, Xiamen, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Sensors; Training; Data models; Activity recognition; Correlation; Intelligent sensors; Training data; Catastrophic forgetting; incremental learning; human activity recognition; mobile device; multi-modality;
DOI
10.1109/JBHI.2021.3085602
CLC classification
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Nowadays, with the development of various kinds of sensors in smartphones and wearable devices, human activity recognition (HAR) has been widely researched and has numerous applications in healthcare, smart cities, etc. Many techniques based on hand-crafted feature engineering or deep neural networks have been proposed for sensor-based HAR. However, these existing methods usually recognize activities offline, which means all data must be collected before training, occupying large amounts of storage space. Moreover, once offline training has finished, the trained model cannot recognize new activities without retraining from scratch, incurring a high cost in time and space. In this paper, we propose a multi-modality incremental learning model, called HarMI, with continuous learning ability. The proposed HarMI model can start training quickly with little storage space and easily learn new activities without storing previous training data. In detail, we first adopt an attention mechanism to align heterogeneous sensor data with different frequencies. In addition, to overcome catastrophic forgetting in incremental learning, HarMI utilizes elastic weight consolidation and canonical correlation analysis from a multi-modality perspective. Extensive experiments on two public datasets demonstrate that HarMI achieves superior performance compared with several state-of-the-art methods.
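The abstract refers to elastic weight consolidation (EWC), a standard regularizer against catastrophic forgetting: parameters important to previously learned tasks (as measured by a diagonal Fisher information estimate) are penalized for drifting from their old values. The sketch below illustrates only the generic EWC penalty, not the paper's specific HarMI model; the names `fisher`, `theta_old`, and the weighting `lam` are illustrative assumptions.

```python
import numpy as np

def ewc_penalty(theta, theta_old, fisher, lam=1.0):
    """Generic EWC quadratic penalty: (lam/2) * sum_i F_i * (theta_i - theta_old_i)^2.

    theta     -- current model parameters (flattened vector)
    theta_old -- parameters learned on the previous task
    fisher    -- diagonal Fisher information estimate, one value per parameter
    lam       -- strength of the consolidation term
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)

# Parameters that mattered for the old task (high Fisher value) are pulled
# back toward their old values much more strongly than unimportant ones.
theta = np.array([1.0, 2.0])
theta_old = np.array([0.0, 0.0])
fisher = np.array([10.0, 0.1])
penalty = ewc_penalty(theta, theta_old, fisher, lam=1.0)
print(penalty)  # 0.5 * (10*1^2 + 0.1*2^2) = 5.2
```

In training, this penalty would be added to the new task's loss, so the model can adapt to new activities while retaining what it learned before.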
Pages: 939-951
Page count: 13
Related papers
(50 in total)
  • [1] Multi-modality learning for human action recognition
    Ren, Ziliang
    Zhang, Qieshi
    Gao, Xiangyang
    Hao, Pengyi
    Cheng, Jun
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (11) : 16185 - 16203
  • [3] Human Action Recognition Via Multi-modality Information
    Gao, Zan
    Song, Jian-ming
    Zhang, Hua
    Liu, An-An
    Xue, Yan-Bing
    Xu, Guang-ping
    [J]. JOURNAL OF ELECTRICAL ENGINEERING & TECHNOLOGY, 2014, 9 (02) : 739 - 748
  • [4] MULTI-MODALITY RECOGNITION OF HUMAN FACE AND EAR BASED ON DEEP LEARNING
    Fan, Ting-Yu
    Mu, Zhi-Chun
    Yang, Ru-Yin
    [J]. 2017 INTERNATIONAL CONFERENCE ON WAVELET ANALYSIS AND PATTERN RECOGNITION (ICWAPR), 2017, : 38 - 42
  • [5] OpenSR: Open-Modality Speech Recognition via Maintaining Multi-Modality Alignment
    Cheng, Xize
    Jin, Tao
    Li, Linjun
    Lin, Wang
    Duan, Xinyu
    Zhao, Zhou
    [J]. PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 6592 - 6607
  • [6] Joint multi-type feature learning for multi-modality FKP recognition
    Yang, Yeping
    Fei, Lunke
    Alshehri, Adel Homoud
    Zhao, Shuping
    Sun, Weijun
    Teng, Shaohua
    [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 126
  • [7] Video Event Detection via Multi-modality Deep Learning
    Jhuo, I-Hong
    Lee, D. T.
    [J]. 2014 22ND INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2014, : 666 - 671
  • [8] TUMOR SEGMENTATION VIA MULTI-MODALITY JOINT DICTIONARY LEARNING
    Wang, Yan
    Yu, Biting
    Wang, Lei
    Zu, Chen
    Luo, Yong
    Wu, Xi
    Yang, Zhipeng
    Zhou, Jiliu
    Zhou, Luping
    [J]. 2018 IEEE 15TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2018), 2018, : 1336 - 1339
  • [9] Multi-modality Fusion Network for Action Recognition
    Huang, Kai
    Qin, Zheng
    Xu, Kaiping
    Ye, Shuxiong
    Wang, Guolong
    [J]. ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2017, PT II, 2018, 10736 : 139 - 149
  • [10] MULTI-MODALITY AMERICAN SIGN LANGUAGE RECOGNITION
    Zhang, Chenyang
    Tian, Yingli
    Huenerfauth, Matt
    [J]. 2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2016, : 2881 - 2885