Federated Self-Supervised Learning of Multisensor Representations for Embedded Intelligence

Cited by: 55
Authors
Saeed, Aaqib [1 ]
Salim, Flora D. [2 ,3 ]
Ozcelebi, Tanir [1 ]
Lukkien, Johan [1 ]
Affiliations
[1] Eindhoven Univ Technol, Dept Math & Comp Sci, NL-5612 AE Eindhoven, Netherlands
[2] RMIT Univ, Sch Sci, Melbourne, Vic 3001, Australia
[3] RMIT Univ, RMIT Ctr Informat Discovery & Data Analyt, Melbourne, Vic 3001, Australia
Keywords
Brain modeling; Task analysis; Data models; Internet of Things; Wavelet transforms; Sleep; Deep learning; embedded intelligence; federated learning; learning representations; low-data regime; self-supervised learning; sensor analytics;
DOI
10.1109/JIOT.2020.3009358
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Smartphones, wearables, and Internet-of-Things (IoT) devices produce a wealth of data that cannot be accumulated in a centralized repository for learning supervised models, due to privacy and bandwidth limitations and the prohibitive cost of annotation. Federated learning provides a compelling framework for learning models from decentralized data, but conventionally it assumes the availability of labeled samples, whereas on-device data are generally either unlabeled or cannot readily be annotated through user interaction. To address these issues, we propose a self-supervised approach termed scalogram-signal correspondence learning, based on the wavelet transform (WT), to learn useful representations from unlabeled sensor inputs such as electroencephalography, blood volume pulse, accelerometer, and WiFi channel-state information. Our auxiliary task requires a deep temporal neural network to determine, by optimizing a contrastive objective, whether a given pair of a signal and its complementary view (i.e., a scalogram generated with the WT) align with each other. We extensively assess the quality of the features learned with our multiview strategy on diverse public data sets, achieving strong performance in all domains. We demonstrate the effectiveness of representations learned from an unlabeled input collection on downstream tasks by training a linear classifier over the pretrained network, and show their usefulness in the low-data regime, transfer learning, and cross validation. Our methodology achieves competitive performance with fully supervised networks and performs significantly better than pretraining with autoencoders in both central and federated contexts. Notably, it improves generalization in a semisupervised setting, as leveraging self-supervised learning reduces the volume of labeled data required.
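The auxiliary task described in the abstract — deciding whether a raw signal and its scalogram view correspond, under a contrastive objective — can be sketched roughly as follows. This is an illustrative NumPy mock-up, not the authors' implementation: the toy Morlet-wavelet scalogram, the NT-Xent-style loss, the temperature value, and the random stand-in embeddings (which in the paper would come from the deep temporal encoder and a scalogram encoder) are all assumptions.

```python
import numpy as np

def morlet_scalogram(x, scales, w0=6.0):
    """Toy continuous wavelet transform: |CWT| of signal x at given scales.
    A stand-in for a library CWT, not the paper's exact preprocessing."""
    rows = []
    for s in scales:
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        rows.append(np.abs(np.convolve(x, wavelet, mode="same")))
    return np.stack(rows)  # shape: (n_scales, len(x))

def contrastive_alignment_loss(z_sig, z_scal, temperature=0.1):
    """NT-Xent-style objective: the i-th signal embedding should align with
    the i-th scalogram embedding and repel the other pairs in the batch."""
    z_sig = z_sig / np.linalg.norm(z_sig, axis=1, keepdims=True)
    z_scal = z_scal / np.linalg.norm(z_scal, axis=1, keepdims=True)
    logits = z_sig @ z_scal.T / temperature           # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # positives on diagonal

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.standard_normal(256)
scalogram = morlet_scalogram(signal, scales=[2, 4, 8, 16])
print(scalogram.shape)  # (4, 256)

# Random vectors merely exercise the loss; aligned pairs score lower than
# mismatched ones, which is the signal the auxiliary task trains on.
z_a, z_b = rng.standard_normal((8, 32)), rng.standard_normal((8, 32))
loss = contrastive_alignment_loss(z_a, z_b)
```

In the paper's setting, minimizing such a loss on unlabeled on-device data (optionally inside a federated averaging loop) yields the pretrained encoder over which the linear classifier is later trained.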
Pages: 1030-1040
Page count: 11