Federated Self-Supervised Learning of Multisensor Representations for Embedded Intelligence

Cited by: 55
Authors
Saeed, Aaqib [1 ]
Salim, Flora D. [2 ,3 ]
Ozcelebi, Tanir [1 ]
Lukkien, Johan [1 ]
Affiliations
[1] Eindhoven Univ Technol, Dept Math & Comp Sci, NL-5612 AE Eindhoven, Netherlands
[2] RMIT Univ, Sch Sci, Melbourne, Vic 3001, Australia
[3] RMIT Univ, RMIT Ctr Informat Discovery & Data Analyt, Melbourne, Vic 3001, Australia
Keywords
Brain modeling; Task analysis; Data models; Internet of Things; Wavelet transforms; Sleep; Deep learning; embedded intelligence; federated learning; learning representations; low-data regime; self-supervised learning; sensor analytics;
DOI
10.1109/JIOT.2020.3009358
CLC (Chinese Library Classification)
TP [Automation Technology, Computer Technology];
Discipline code
0812
Abstract
Smartphones, wearables, and Internet-of-Things (IoT) devices produce a wealth of data that cannot be accumulated in a centralized repository for learning supervised models due to privacy, bandwidth limitations, and the prohibitive cost of annotations. Federated learning provides a compelling framework for learning models from decentralized data, but it conventionally assumes the availability of labeled samples, whereas on-device data are generally either unlabeled or cannot be annotated readily through user interaction. To address these issues, we propose a self-supervised approach termed scalogram-signal correspondence learning, based on the wavelet transform (WT), to learn useful representations from unlabeled sensor inputs such as electroencephalography, blood volume pulse, accelerometer, and WiFi channel-state information. Our auxiliary task requires a deep temporal neural network to determine whether a given pair of a signal and its complementary view (i.e., a scalogram generated with the WT) align with each other, by optimizing a contrastive objective. We extensively assess the quality of the features learned with our multiview strategy on diverse public data sets, achieving strong performance in all domains. We demonstrate the effectiveness of representations learned from an unlabeled input collection on downstream tasks by training a linear classifier over the pretrained network, and show their usefulness in the low-data regime, transfer learning, and cross-validation. Our methodology achieves competitive performance with fully supervised networks, and it works significantly better than pretraining with autoencoders in both central and federated contexts. Notably, it improves generalization in a semisupervised setting, as it reduces the volume of labeled data required by leveraging self-supervised learning.
Pages: 1030-1040
Page count: 11
Related Papers
(50 in total)
  • [31] A framework for self-supervised federated domain adaptation
    Wang, Bin
    Li, Gang
    Wu, Chao
    Zhang, WeiShan
    Zhou, Jiehan
    Wei, Ye
    EURASIP JOURNAL ON WIRELESS COMMUNICATIONS AND NETWORKING, 2022
  • [32] Self-supervised graph representations of WSIs
    Pina, Oscar
    Vilaplana, Veronica
    GEOMETRIC DEEP LEARNING IN MEDICAL IMAGE ANALYSIS, VOL 194, 2022, 194 : 107 - 117
  • [33] A study of the generalizability of self-supervised representations
    Tendle, Atharva
    Hasan, Mohammad Rashedul
    MACHINE LEARNING WITH APPLICATIONS, 2021, 6
  • [35] Maximizing model generalization for machine condition monitoring with Self-Supervised Learning and Federated Learning
    Russell, Matthew
    Wang, Peng
    JOURNAL OF MANUFACTURING SYSTEMS, 2023, 71 : 274 - 285
  • [36] Federated Self-Supervised Learning in Heterogeneous Settings: Limits of a Baseline Approach on HAR
    Sannara, E. K.
    Rombourg, Romain
    Portet, Francois
    Lalanda, Philippe
    2022 IEEE INTERNATIONAL CONFERENCE ON PERVASIVE COMPUTING AND COMMUNICATIONS WORKSHOPS AND OTHER AFFILIATED EVENTS (PERCOM WORKSHOPS), 2022,
  • [37] Learning self-supervised molecular representations for drug–drug interaction prediction
    Kpanou, Rogia
    Dallaire, Patrick
    Rousseau, Elsa
    Corbeil, Jacques
    BMC BIOINFORMATICS, 25
  • [38] TabFedSL: A Self-Supervised Approach to Labeling Tabular Data in Federated Learning Environments
    Wang, Ruixiao
    Hu, Yanxin
    Chen, Zhiyu
    Guo, Jianwei
    Liu, Gang
    MATHEMATICS, 2024, 12 (08)
  • [39] BYOL-S: Learning Self-supervised Speech Representations by Bootstrapping
    Elbanna, Gasser
    Scheidwasser-Clow, Neil
    Kegler, Mikolaj
    Beckmann, Pierre
    El Hajal, Karl
    Cernak, Milos
    HEAR: HOLISTIC EVALUATION OF AUDIO REPRESENTATIONS, VOL 166, 2021, 166 : 25 - 47
  • [40] Visual Reinforcement Learning With Self-Supervised 3D Representations
    Ze, Yanjie
    Hansen, Nicklas
    Chen, Yinbo
    Jain, Mohit
    Wang, Xiaolong
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (05) : 2890 - 2897