Federated Self-Supervised Learning of Multisensor Representations for Embedded Intelligence

Cited: 55
Authors
Saeed, Aaqib [1 ]
Salim, Flora D. [2 ,3 ]
Ozcelebi, Tanir [1 ]
Lukkien, Johan [1 ]
Affiliations
[1] Eindhoven Univ Technol, Dept Math & Comp Sci, NL-5612 AE Eindhoven, Netherlands
[2] RMIT Univ, Sch Sci, Melbourne, Vic 3001, Australia
[3] RMIT Univ, RMIT Ctr Informat Discovery & Data Analyt, Melbourne, Vic 3001, Australia
Keywords
Brain modeling; Task analysis; Data models; Internet of Things; Wavelet transforms; Sleep; Deep learning; embedded intelligence; federated learning; learning representations; low-data regime; self-supervised learning; sensor analytics
DOI
10.1109/JIOT.2020.3009358
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Smartphones, wearables, and Internet-of-Things (IoT) devices produce a wealth of data that cannot be accumulated in a centralized repository for learning supervised models due to privacy, bandwidth limitations, and the prohibitive cost of annotations. Federated learning provides a compelling framework for learning models from decentralized data, but conventionally, it assumes the availability of labeled samples, whereas on-device data are generally either unlabeled or cannot be annotated readily through user interaction. To address these issues, we propose a self-supervised approach termed scalogram-signal correspondence learning based on the wavelet transform (WT) to learn useful representations from unlabeled sensor inputs such as electroencephalography, blood volume pulse, accelerometer, and WiFi channel-state information. Our auxiliary task requires a deep temporal neural network to determine whether a given signal and its complementary view (i.e., a scalogram generated with the WT) align with each other, by optimizing a contrastive objective. We extensively assess the quality of the features learned with our multiview strategy on diverse public data sets, achieving strong performance in all domains. We demonstrate the effectiveness of the representations learned from an unlabeled input collection on downstream tasks by training a linear classifier over the pretrained network, and show their usefulness in the low-data regime, in transfer learning, and under cross-validation. Our methodology achieves performance competitive with fully supervised networks and works significantly better than pretraining with autoencoders in both central and federated contexts. Notably, it improves generalization in a semisupervised setting, as it reduces the volume of labeled data required by leveraging self-supervised learning.
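To make the auxiliary task concrete, below is a minimal PyTorch sketch of the signal-scalogram correspondence idea described in the abstract. The encoder architectures, the wavelet choice ("morl"), the scale range, and the temperature are illustrative assumptions rather than the paper's actual configuration; only the overall scheme (a raw 1-D signal view, a wavelet-transform scalogram view, and a contrastive objective that aligns each signal with its own scalogram within a batch) follows the text above.

```python
# Minimal sketch of scalogram-signal correspondence learning.
# Hypothetical encoders and hyperparameters; the paper's networks differ.
import numpy as np
import pywt
import torch
import torch.nn as nn
import torch.nn.functional as F

def scalogram(signal, scales=np.arange(1, 33), wavelet="morl"):
    """Complementary view: continuous wavelet transform of a 1-D signal."""
    coeffs, _ = pywt.cwt(signal, scales, wavelet)  # (n_scales, n_samples)
    return torch.tensor(np.abs(coeffs), dtype=torch.float32)

class SignalEncoder(nn.Module):
    """1-D temporal CNN over the raw signal (illustrative only)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, 8, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, 8, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, dim))
    def forward(self, x):  # x: (B, 1, T)
        return F.normalize(self.net(x), dim=-1)

class ScalogramEncoder(nn.Module):
    """2-D CNN over the scalogram view (illustrative only)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))
    def forward(self, x):  # x: (B, 1, n_scales, T)
        return F.normalize(self.net(x), dim=-1)

def correspondence_loss(z_sig, z_scal, temperature=0.1):
    """Contrastive objective: each signal embedding should match its own
    scalogram embedding (the diagonal) over the other pairs in the batch."""
    logits = z_sig @ z_scal.t() / temperature  # (B, B) cosine similarities
    targets = torch.arange(z_sig.size(0))      # positive pairs on diagonal
    return F.cross_entropy(logits, targets)

# Toy usage on random data standing in for an unlabeled sensor batch.
sig = torch.randn(8, 1, 256)
scal = torch.stack([scalogram(s.squeeze().numpy()) for s in sig]).unsqueeze(1)
loss = correspondence_loss(SignalEncoder()(sig), ScalogramEncoder()(scal))
loss.backward()
```

In a federated setting, this pretraining objective needs no labels, so each device could run it locally on its own sensor streams while only model updates are aggregated centrally, which is the setting the abstract describes.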
Pages: 1030 - 1040
Page count: 11
Related Papers
50 items in total
  • [41] Self-Supervised Learning of Audio Representations From Permutations With Differentiable Ranking
    Carr, Andrew N.
    Berthet, Quentin
    Blondel, Mathieu
    Teboul, Olivier
    Zeghidour, Neil
    IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 708 - 712
  • [42] Repeat and learn: Self-supervised visual representations learning by Scene Localization
    Altabrawee, Hussein
    Noor, Mohd Halim Mohd
    PATTERN RECOGNITION, 2024, 156
  • [43] Learning Self-supervised Audio-Visual Representations for Sound Recommendations
    Krishnamurthy, Sudha
    ADVANCES IN VISUAL COMPUTING (ISVC 2021), PT II, 2021, 13018 : 124 - 138
  • [44] Negative sampling strategies for contrastive self-supervised learning of graph representations
    Hafidi, Hakim
    Ghogho, Mounir
    Ciblat, Philippe
    Swami, Ananthram
    SIGNAL PROCESSING, 2022, 190
  • [45] CASTing Your Model: Learning to Localize Improves Self-Supervised Representations
    Selvaraju, Ramprasaath R.
    Desai, Karan
    Johnson, Justin
    Naik, Nikhil
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 11053 - 11062
  • [46] Federated Self-Supervised Learning Based on Prototypes Clustering Contrastive Learning for Internet of Vehicles Applications
    Dai, Cheng
    Wei, Shuai
    Dai, Shengxin
    Garg, Sahil
    Kaddoum, Georges
    Shamim Hossain, M.
    IEEE INTERNET OF THINGS JOURNAL, 2025, 12 (05) : 4692 - 4700
  • [47] Self-Supervised Dialogue Learning
    Wu, Jiawei
    Wang, Xin
    Wang, William Yang
    57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 3857 - 3867
  • [48] Learning Representations for Bipartite Graphs Using Multi-task Self-supervised Learning
    Sethi, Akshay
    Gupta, Sonia
    Malhotra, Aakarsh
    Asthana, Siddhartha
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, ECML PKDD 2023, PT III, 2023, 14171 : 19 - 35
  • [49] Self-supervised learning model
    Saga, Kazushie
    Sugasaka, Tamami
    Sekiguchi, Minoru
    FUJITSU SCIENTIFIC AND TECHNICAL JOURNAL, 1993, 29 (03) : 209 - 216
  • [50] Longitudinal self-supervised learning
    Zhao, Qingyu
    Liu, Zixuan
    Adeli, Ehsan
    Pohl, Kilian M.
    MEDICAL IMAGE ANALYSIS, 2021, 71