Federated Multi-task Learning with Hierarchical Attention for Sensor Data Analytics

Times Cited: 5
Authors
Chen, Yujing [1 ]
Ning, Yue [2 ]
Chai, Zheng [1 ]
Rangwala, Huzefa [1 ]
Affiliations
[1] George Mason Univ, Dept Comp Sci, Fairfax, VA 22030 USA
[2] Stevens Inst Technol, Dept Comp Sci, Hoboken, NJ 07030 USA
Keywords
Sensor analytics; Attention mechanism; Multi-task learning;
DOI
10.1109/ijcnn48605.2020.9207508
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The past decade has been marked by the rapid emergence and proliferation of a myriad of small devices, such as smartphones and wearables. There is a critical need to analyze the multivariate temporal data collected by sensors on these devices. Because sensor data are heterogeneous, an individual device may not have enough high-quality data to learn an effective model, and skewed or varied data distributions further complicate sensor data analytics. In this paper, we propose to leverage multi-task learning with an attention mechanism to perform inductive knowledge transfer among related devices and improve generalization performance. We design a novel federated multi-task hierarchical attention model (FATHOM) that jointly trains classification/regression models across multiple distributed devices. The attention mechanism extracts feature representations from the inputs and learns a representation shared across devices to identify the key features at each time step. The underlying temporal and nonlinear relationships are modeled with a combination of attention mechanisms and long short-term memory (LSTM) networks. The proposed method outperforms a wide range of competitive baselines in both classification and regression settings on three unbalanced real-world datasets. It also allows for visual characterization of the key features learned at the per-task input level and at the global temporal level.
Pages: 8
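As a rough illustration of the architecture described in the abstract, the sketch below combines feature-level attention, an LSTM encoder, and temporal attention for multivariate sensor classification. It is not the authors' FATHOM implementation; all module names, dimensions, and the aggregation scheme are assumptions made for the example.

```python
# Minimal sketch (PyTorch) of feature-level + temporal attention over an LSTM,
# in the spirit of the hierarchical attention described in the abstract.
# All names, shapes, and design choices are illustrative assumptions,
# not the authors' released FATHOM code.
import torch
import torch.nn as nn


class HierarchicalAttentionSketch(nn.Module):
    def __init__(self, n_features: int, hidden_dim: int = 64, n_classes: int = 2):
        super().__init__()
        # Feature-level attention: scores each input feature at every time step.
        self.feature_attn = nn.Linear(n_features, n_features)
        # Temporal encoder over the attention-weighted inputs.
        self.lstm = nn.LSTM(n_features, hidden_dim, batch_first=True)
        # Temporal attention: scores each time step's hidden state.
        self.time_attn = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(hidden_dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features) multivariate sensor readings.
        feat_weights = torch.softmax(self.feature_attn(x), dim=-1)  # which features matter per step
        weighted = x * feat_weights
        hidden, _ = self.lstm(weighted)                             # (batch, time, hidden_dim)
        time_scores = torch.softmax(self.time_attn(hidden), dim=1)  # which time steps matter
        context = (time_scores * hidden).sum(dim=1)                 # (batch, hidden_dim)
        return self.classifier(context)


# Example: 32 windows of 50 time steps from a device with 9 sensor channels.
model = HierarchicalAttentionSketch(n_features=9)
logits = model(torch.randn(32, 50, 9))
print(logits.shape)  # torch.Size([32, 2])
```

In a federated multi-task setting, one plausible arrangement (again, an assumption, not the paper's exact protocol) is for each device to keep its task-specific classifier local while the shared attention parameters are what the server aggregates across devices.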