A Parameter-Efficient Learning Approach to Arabic Dialect Identification with Pre-Trained General-Purpose Speech Model

Cited by: 1
Authors
Radhakrishnan, Srijith [1 ,2 ,4 ]
Yang, Chao-Han Huck [1 ,3 ]
Khan, Sumeer Ahmad [1 ,4 ]
Kiani, Narsis A. [1 ]
Gomez-Cabrero, David [1 ]
Tegner, Jesper N. [1 ,4 ]
Affiliations
[1] King Abdullah Univ Sci & Technol, Thuwal, Saudi Arabia
[2] Manipal Inst Technol, Manipal, India
[3] Georgia Inst Technol, Atlanta, GA USA
[4] SDAIA KAUST Ctr Excellence Data Sci & Artificial, Thuwal 23952, Saudi Arabia
Keywords
Parameter-Efficient Learning; Dialect Identification; Arabic Dialect;
DOI
10.21437/Interspeech.2023-1407
Chinese Library Classification
O42 [Acoustics];
Discipline Classification Codes
070206 ; 082403 ;
Abstract
In this work, we explore parameter-efficient learning (PEL) techniques to repurpose a general-purpose speech model (GSM) for Arabic dialect identification (ADI). Specifically, we investigate different setups for incorporating trainable features into a multi-layer encoder-decoder GSM while keeping the pre-trained weights frozen. Our architecture includes residual adapters and model reprogramming (input prompting), and we design a token-level label mapping to condition the GSM for ADI. We achieve new state-of-the-art accuracy on the ADI-17 dataset with vanilla fine-tuning. We further reduce the training budget with the PEL method, which comes within 1.86% of the fine-tuning accuracy while adding only 2.5% extra trainable network parameters. Our study demonstrates how to identify Arabic dialects using a small dataset and limited computation, with open-source code at https://github.com/Srijith-rkr/KAUST-Whisper-Adapter
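The residual-adapter idea mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration of a bottleneck adapter with a skip connection, not the authors' Whisper implementation: the dimensions, the ReLU nonlinearity, and the zero-initialized up-projection are assumptions chosen to show why the frozen backbone's behavior is preserved at the start of training.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_adapter(x, w_down, w_up):
    """Bottleneck residual adapter: project down, apply a nonlinearity,
    project back up, then add the skip connection so the frozen
    backbone's features pass through unchanged when the adapter is null."""
    h = np.maximum(x @ w_down, 0.0)  # down-projection + ReLU
    return x + h @ w_up              # up-projection + residual skip

d_model, d_bottleneck = 8, 2         # illustrative sizes, far smaller than Whisper's
w_down = rng.standard_normal((d_model, d_bottleneck)) * 0.01
w_up = np.zeros((d_bottleneck, d_model))  # zero-init: adapter starts as identity

x = rng.standard_normal((4, d_model))     # stand-in for frozen-encoder features
y = residual_adapter(x, w_down, w_up)
print(np.allclose(x, y))                  # zero-initialized adapter is an identity map
```

Only `w_down` and `w_up` would be trained while the backbone stays frozen, which is how such adapters keep the extra trainable parameter count small (on the order of the 2.5% figure reported above).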
Pages: 1958-1962 (5 pages)