Efficiency-oriented approaches for self-supervised speech representation learning

Cited by: 0
Authors
Lugo, Luis [1]
Vielzeuf, Valentin [1]
Affiliation
[1] Orange, 4 Rue du Clos Courtel, Cesson-Sevigne, Brittany, 35510, France
Keywords
Adversarial machine learning; Contrastive learning; Federated learning; Knowledge representation; Semi-supervised learning; Speech processing; Transfer learning
DOI
10.1007/s10772-024-10121-9
Abstract
Self-supervised learning enables the training of large neural models without the need for large labeled datasets. It has generated breakthroughs in several fields, including computer vision, natural language processing, biology, and speech. In particular, the state of the art in several speech processing applications, such as automatic speech recognition or speaker identification, is held by models whose latent representations are learned with self-supervised approaches. Several configurations exist in self-supervised learning for speech, including contrastive, predictive, and multilingual approaches. There is, however, a crucial limitation in the majority of existing approaches: their high computational cost. These costs limit the deployment of models, the size of the training dataset, and the number of research groups that can afford research with large self-supervised models. Likewise, the environmental costs implied by high energy consumption must be considered. Efforts in this direction comprise the optimization of existing models, neural architecture efficiency, improvements in fine-tuning for speech processing tasks, and data efficiency. Despite these efforts, more work is needed to address the high computational costs of self-supervised representation learning. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
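For readers unfamiliar with the contrastive configuration mentioned in the abstract, the sketch below shows a minimal InfoNCE-style objective of the kind used in wav2vec 2.0-like speech models. It is an illustrative sketch under stated assumptions, not the implementation of any model surveyed in the paper; the function name, tensor shapes, and temperature value are placeholders.

```python
# Minimal sketch of an InfoNCE-style contrastive objective for speech SSL.
# Hypothetical tensor names and shapes; not the authors' implementation.
import torch
import torch.nn.functional as F

def info_nce_loss(context, positives, negatives, temperature=0.1):
    """context:   (B, T, D) contextual representations at masked time steps
       positives: (B, T, D) target representations for those steps
       negatives: (B, T, K, D) K distractor representations per step"""
    # Similarity between each context vector and its positive target: (B, T, 1)
    pos_sim = F.cosine_similarity(context, positives, dim=-1).unsqueeze(-1)
    # Similarity between each context vector and its K distractors: (B, T, K)
    neg_sim = F.cosine_similarity(context.unsqueeze(2), negatives, dim=-1)
    # Logits over [positive, distractors]; the correct class is index 0
    logits = torch.cat([pos_sim, neg_sim], dim=-1) / temperature
    targets = torch.zeros(logits.shape[:-1], dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits.flatten(0, 1), targets.flatten())
```

In wav2vec 2.0-style training, for example, the positives are quantized representations of masked time steps and the negatives are sampled from other masked steps of the same utterance.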
Pages: 765-779
Number of pages: 14