Easy Ensemble: Simple Deep Ensemble Learning for Sensor-Based Human Activity Recognition

Cited by: 5
Authors
Hasegawa, Tatsuhito [1 ]
Kondo, Kazuma [1 ,2 ]
Affiliations
[1] Univ Fukui, Grad Sch Engn, Fundamental Engn Knowledge Based Soc, Fukui 9108507, Japan
[2] NEC Solut Innovators Ltd, Tokyo 1360082, Japan
Funding
Japan Society for the Promotion of Science (JSPS);
Keywords
Context awareness; deep learning; ensemble learning; human activity recognition (HAR); CONVOLUTIONAL NEURAL-NETWORK; SMARTPHONE;
DOI
10.1109/JIOT.2022.3222221
Chinese Library Classification (CLC)
TP [automation technology; computer technology];
Subject classification code
0812;
Abstract
Sensor-based human activity recognition (HAR) is a paramount technology for Internet of Things services. HAR based on representation learning, which automatically learns feature representations from raw data, is the mainstream approach because raw sensor data are difficult to interpret well enough to design meaningful hand-crafted features. Ensemble learning is a robust way to improve generalization performance; however, deep ensemble learning requires multiple procedures, such as partitioning the data and training several models, that are time-consuming and computationally expensive. In this study, we propose an easy ensemble (EE) for HAR, which enables deep ensemble learning to be implemented easily within a single model. In addition, we propose several techniques for EE: input variationer, stepwise ensemble, and channel shuffle. Experiments on a benchmark data set for HAR demonstrated the effectiveness of EE and these techniques, and characterized them in comparison with conventional ensemble learning methods.
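Of the techniques named in the abstract, channel shuffle is a well-established operation (popularized by ShuffleNet) that interleaves feature channels across groups so that otherwise-independent grouped branches exchange information. Below is a minimal NumPy sketch of the standard reshape/transpose formulation for 1-D sensor feature maps; the paper's exact variant within EE may differ:

```python
import numpy as np

def channel_shuffle(x: np.ndarray, groups: int) -> np.ndarray:
    """Interleave channels across groups (ShuffleNet-style reshape/transpose).

    x: feature map of shape (batch, channels, time_steps).
    """
    n, c, t = x.shape
    assert c % groups == 0, "channel count must divide evenly into groups"
    # (N, C, T) -> (N, G, C//G, T) -> (N, C//G, G, T) -> (N, C, T)
    return x.reshape(n, groups, c // groups, t).transpose(0, 2, 1, 3).reshape(n, c, t)

# Example: 4 channels split into 2 groups; shuffling yields channel order [0, 2, 1, 3],
# so each new group contains one channel from every original group.
x = np.arange(4).reshape(1, 4, 1)
print(channel_shuffle(x, 2).ravel().tolist())  # [0, 2, 1, 3]
```

The reshape/transpose trick costs no learned parameters, which is consistent with the paper's goal of keeping the ensemble inside a single model.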
Pages: 5506-5518
Page count: 13
Related Papers
50 records
  • [1] Ensemble Approach for Sensor-Based Human Activity Recognition
    Brajesh, Sunidhi
    Ray, Indraneel
    UBICOMP/ISWC '20 ADJUNCT: PROCEEDINGS OF THE 2020 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING AND PROCEEDINGS OF THE 2020 ACM INTERNATIONAL SYMPOSIUM ON WEARABLE COMPUTERS, 2020, : 296 - 300
  • [2] Selective Ensemble Based on Extreme Learning Machine for Sensor-Based Human Activity Recognition
    Tian, Yiming
    Zhang, Jie
    Chen, Lingling
    Geng, Yanli
    Wang, Xitai
    SENSORS, 2019, 19 (16)
  • [3] Ensem-HAR: An Ensemble Deep Learning Model for Smartphone Sensor-Based Human Activity Recognition for Measurement of Elderly Health Monitoring
    Bhattacharya, Debarshi
    Sharma, Deepak
    Kim, Wonjoon
    Ijaz, Muhammad Fazal
    Singh, Pawan Kumar
    BIOSENSORS-BASEL, 2022, 12 (06)
  • [4] Deep learning and model personalization in sensor-based human activity recognition
    Ferrari A.
    Micucci D.
    Mobilio M.
    Napoletano P.
    Journal of Reliable Intelligent Environments, 2023, 9 (01) : 27 - 39
  • [5] Hand-Crafted Features With a Simple Deep Learning Architecture for Sensor-Based Human Activity Recognition
    Albadawi, Yaman
    Shanableh, Tamer
    IEEE SENSORS JOURNAL, 2024, 24 (17) : 28300 - 28313
  • [6] Deep learning for sensor-based activity recognition: A survey
    Wang, Jindong
    Chen, Yiqiang
    Hao, Shuji
    Peng, Xiaohui
    Hu, Lisha
    PATTERN RECOGNITION LETTERS, 2019, 119 : 3 - 11
  • [7] Ensemble Learning for Human Activity Recognition
    Sekiguchi, Ryoichi
    Abe, Kenji
    Yokoyama, Takumi
    Kumano, Masayasu
    Kawakatsu, Masaki
    UBICOMP/ISWC '20 ADJUNCT: PROCEEDINGS OF THE 2020 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING AND PROCEEDINGS OF THE 2020 ACM INTERNATIONAL SYMPOSIUM ON WEARABLE COMPUTERS, 2020, : 335 - 339
  • [8] Comprehensive machine and deep learning analysis of sensor-based human activity recognition
    Balaha, Hossam Magdy
    Hassan, Asmaa El-Sayed
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (17) : 12793 - 12831
  • [9] A Multitask Deep Learning Approach for Sensor-Based Human Activity Recognition and Segmentation
    Duan, Furong
    Zhu, Tao
    Wang, Jinqiang
    Chen, Liming
    Ning, Huansheng
    Wan, Yaping
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [10] Hybrid deep learning approaches for smartphone sensor-based human activity recognition
    Vasundhara Ghate
    Sweetlin Hemalatha C
    Multimedia Tools and Applications, 2021, 80 : 35585 - 35604