Federated Self-Supervised Learning of Multisensor Representations for Embedded Intelligence

Cited by: 55
Authors
Saeed, Aaqib [1 ]
Salim, Flora D. [2 ,3 ]
Ozcelebi, Tanir [1 ]
Lukkien, Johan [1 ]
Affiliations
[1] Eindhoven Univ Technol, Dept Math & Comp Sci, NL-5612 AE Eindhoven, Netherlands
[2] RMIT Univ, Sch Sci, Melbourne, Vic 3001, Australia
[3] RMIT Univ, RMIT Ctr Informat Discovery & Data Analyt, Melbourne, Vic 3001, Australia
Keywords
Brain modeling; Task analysis; Data models; Internet of Things; Wavelet transforms; Sleep; Deep learning; embedded intelligence; federated learning; learning representations; low-data regime; self-supervised learning; sensor analytics
DOI: 10.1109/JIOT.2020.3009358
Chinese Library Classification: TP [Automation & Computer Technology]
Discipline code: 0812
Abstract
Smartphones, wearables, and Internet-of-Things (IoT) devices produce a wealth of data that cannot be accumulated in a centralized repository for learning supervised models due to privacy, bandwidth limitations, and the prohibitive cost of annotations. Federated learning provides a compelling framework for learning models from decentralized data, but conventionally, it assumes the availability of labeled samples, whereas on-device data are generally either unlabeled or cannot be annotated readily through user interaction. To address these issues, we propose a self-supervised approach termed scalogram-signal correspondence learning, based on the wavelet transform (WT), to learn useful representations from unlabeled sensor inputs such as electroencephalography, blood volume pulse, accelerometer, and WiFi channel-state information. Our auxiliary task requires a deep temporal neural network to determine whether a given pair of a signal and its complementary view (i.e., a scalogram generated with the WT) align with each other, by optimizing a contrastive objective. We extensively assess the quality of features learned with our multiview strategy on diverse public data sets, achieving strong performance in all domains. We demonstrate the effectiveness of representations learned from an unlabeled input collection on downstream tasks by training a linear classifier over the pretrained network, as well as their usefulness in the low-data regime, in transfer learning, and under cross-validation. Our methodology achieves competitive performance with fully supervised networks and works significantly better than pretraining with autoencoders in both central and federated contexts. Notably, it improves generalization in a semisupervised setting, as it reduces the volume of labeled data required by leveraging self-supervised learning.
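The pretext task described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the Morlet-based scalogram, the toy linear encoders implied by the inputs `z_sig` and `z_scal`, and the function names are all assumptions, and the NT-Xent-style loss is one common way to realize the contrastive objective, where aligned (signal, scalogram) pairs in a batch are positives and all other pairings are negatives.

```python
import numpy as np

def scalogram(signal, scales, fs=1.0):
    """Naive Morlet-wavelet scalogram: one row of |CWT| magnitudes per scale."""
    rows = []
    for s in scales:
        # Morlet wavelet sampled at the signal's resolution; support ~ 8 scales wide
        tw = np.arange(-4 * s, 4 * s + 1) / fs
        wavelet = np.exp(1j * 5.0 * tw / s) * np.exp(-0.5 * (tw / s) ** 2)
        rows.append(np.abs(np.convolve(signal, wavelet, mode="same")))
    return np.stack(rows)  # shape: (len(scales), len(signal))

def contrastive_loss(z_sig, z_scal, temperature=0.1):
    """NT-Xent-style correspondence loss between signal and scalogram embeddings.

    z_sig, z_scal: (batch, dim) embeddings from the two views; row i of each
    comes from the same underlying sensor window, so the diagonal of the
    similarity matrix holds the positive (aligned) pairs.
    """
    z_sig = z_sig / np.linalg.norm(z_sig, axis=1, keepdims=True)
    z_scal = z_scal / np.linalg.norm(z_scal, axis=1, keepdims=True)
    logits = z_sig @ z_scal.T / temperature           # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # cross-entropy on true pairs
```

In a full pipeline, the two views would pass through separate deep temporal encoders before the loss; here the embeddings are taken as given so the shape of the objective stays visible. Minimizing this loss pulls each signal embedding toward its own scalogram embedding and away from the scalograms of other windows in the batch.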
Pages: 1030-1040 (11 pages)