On the Locality of Attention in Direct Speech Translation

Cited by: 0
Authors
Alastruey, Belen [1 ]
Ferrando, Javier [1 ]
Gallego, Gerard I. [1]
Costa-jussa, Marta R. [1 ]
Affiliations
[1] Univ Politecn Cataluna, TALP Res Ctr, Barcelona, Spain
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Transformers have achieved state-of-the-art results across multiple NLP tasks. However, the complexity of the self-attention mechanism scales quadratically with the sequence length, creating an obstacle for tasks involving long sequences, such as those in the speech domain. In this paper, we discuss the usefulness of self-attention for Direct Speech Translation. First, we analyze the layer-wise token contributions in the self-attention of the encoder, unveiling local diagonal patterns. To show that some attention weights are avoidable, we propose to substitute the standard self-attention with an efficient local one, setting the amount of context according to the results of the analysis. With this approach, our model matches the baseline performance while improving efficiency by skipping the computation of the weights that standard attention discards.
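The record does not include code, so the sketch below is only a rough illustration of the idea described in the abstract: restricting self-attention to a fixed window of context around the diagonal. The function name `local_self_attention`, the window size parameter, and the use of PyTorch are assumptions for illustration, not the authors' implementation; note also that masking a full score matrix reproduces the local attention pattern but not the efficiency gains the paper reports, which require skipping the masked computations entirely.

```python
import torch
import torch.nn.functional as F

def local_self_attention(q, k, v, window: int):
    """Single-head scaled dot-product attention restricted to a local window.

    q, k, v: tensors of shape (seq_len, d). `window` (assumed parameter) is the
    number of positions attended to on each side of the diagonal. For clarity
    this computes the full score matrix and masks it; an efficient local
    attention would avoid computing the out-of-window scores altogether.
    """
    seq_len, d = q.shape
    scores = q @ k.transpose(0, 1) / d ** 0.5              # (seq_len, seq_len)

    # Band mask: position i may only attend to positions j with |i - j| <= window.
    idx = torch.arange(seq_len)
    band = (idx[None, :] - idx[:, None]).abs() <= window
    scores = scores.masked_fill(~band, float("-inf"))

    weights = F.softmax(scores, dim=-1)                     # rows sum to 1 inside the band
    return weights @ v

# Toy usage: 6 frames with 4-dimensional representations, window of 2.
x = torch.randn(6, 4)
out = local_self_attention(x, x, x, window=2)
print(out.shape)  # torch.Size([6, 4])
```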
Pages: 402-412 (11 pages)