What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks?

Citations: 20
Authors
Qian, Hangwei [1]
Tian, Tian [1]
Miao, Chunyan [1]
Affiliations
[1] Nanyang Technol Univ, Singapore, Singapore
Funding
National Research Foundation of Singapore;
Keywords
contrastive learning; human activity recognition; wearable sensors; open-source library; empirical investigations;
DOI
10.1145/3534678.3539134
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Self-supervised learning establishes a new paradigm of learning representations with far fewer, or even no, label annotations. Recently there has been remarkable progress in large-scale contrastive learning models, which require substantial computing resources, yet such models are not practically optimal for small-scale tasks. To fill the gap, we aim to study contrastive learning on the wearable-based activity recognition task. Specifically, we conduct an in-depth study of contrastive learning from both algorithmic-level and task-level perspectives. For the algorithmic-level analysis, we decompose contrastive models into several key components and conduct rigorous experimental evaluations to better understand the efficacy and rationale behind contrastive learning. More importantly, for the task-level analysis, we show that wearable-based signals bring unique challenges and opportunities to existing contrastive models, which cannot be readily solved by existing algorithms. Our thorough empirical studies suggest important practices and shed light on future research challenges. In addition, this paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers. The library is highly modularized and easy to use, opening up avenues for quickly exploring novel contrastive models in the future.
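The abstract describes decomposing contrastive models into key components; at the core of most such models is a contrastive objective applied to two augmented views of the same input. As a hedged illustration only (this is not the CL-HAR API; the function name and NumPy formulation are my own), a minimal sketch of the standard NT-Xent loss as used in SimCLR-style models, applied to batches of embedded sensor windows:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Minimal NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N sensor
    windows. Positive pairs are (z1[i], z2[i]); the other 2N - 2 embeddings
    in the batch serve as negatives for each anchor.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    n = z1.shape[0]
    # Each anchor i pairs with its other view at index i +/- n.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Exclude self-similarity from the softmax denominator.
    np.fill_diagonal(sim, -np.inf)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

As expected of a contrastive objective, the loss is lower when the two views embed nearly identically than when they are unrelated; in practice the embeddings would come from an encoder (e.g. a CNN or Transformer over accelerometer/gyroscope windows) rather than being raw arrays.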
Pages: 3761 - 3771
Page count: 11
Related Articles
50 records in total
  • [21] Deep Learning Empowered Wearable-Based Behavior Recognition for Search and Rescue Dogs
    Kasnesis, Panagiotis
    Doulgerakis, Vasileios
    Uzunidis, Dimitris
    Kogias, Dimitris G.
    Funcia, Susana I.
    Gonzalez, Marta B.
    Giannousis, Christos
    Patrikakis, Charalampos Z.
    SENSORS, 2022, 22 (03)
  • [22] What makes small beautiful? Learning and development in small firms
    Csillag, Sara
    Csizmadia, Peter
    Hidegh, Anna Laura
    Szaszvari, Karina
    HUMAN RESOURCE DEVELOPMENT INTERNATIONAL, 2019, 22 (05) : 453 - 476
  • [23] Frames of reference in small-scale spatial tasks in wild bumblebees
    Martin-Ordas, Gema
    SCIENTIFIC REPORTS, 2022, 12
  • [24] What makes for uniformity for non-contrastive self-supervised learning?
    Wang, YinQuan
    Zhang, XiaoPeng
    Tian, Qi
    Lü, JinHu
    SCIENCE CHINA-TECHNOLOGICAL SCIENCES, 2022, 65 (10) : 2399 - 2408
  • [26] Interaction in online postgraduate learning: what makes a good forum?
    Kipling, Richard P.
    Stiles, William A. V.
    de Andrade-Lima, Micael
    MacKintosh, Neil
    Roberts, Meirion W.
    Williams, Cate L.
    Wootton-Beard, Peter C.
    Watson-Jones, Sarah J.
    DISTANCE EDUCATION, 2023, 44 (01) : 162 - 189
  • [30] What Makes a Good Order of Examples in In-Context Learning
    Guo, Qi
    Wang, Leiyu
    Wang, Yidong
    Ye, Wei
    Zhang, Shikun
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 14892 - 14904