共 50 条
- [1] Self-Attention Transducers for End-to-End Speech Recognition [J]. INTERSPEECH 2019, 2019, : 4395 - 4399
- [2] TRANSFORMER-BASED END-TO-END SPEECH RECOGNITION WITH LOCAL DENSE SYNTHESIZER ATTENTION [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 5899 - 5903
- [3] TRANSFORMER-BASED ONLINE CTC/ATTENTION END-TO-END SPEECH RECOGNITION ARCHITECTURE [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 6084 - 6088
- [5] Very Deep Self-Attention Networks for End-to-End Speech Recognition [J]. INTERSPEECH 2019, 2019, : 66 - 70
- [6] An End-to-End Transformer-Based Automatic Speech Recognition for Qur?an Reciters [J]. CMC-COMPUTERS MATERIALS & CONTINUA, 2023, 74 (02): : 3471 - 3487
- [7] Transformer-based Long-context End-to-end Speech Recognition [J]. INTERSPEECH 2020, 2020, : 5011 - 5015
- [8] On-device Streaming Transformer-based End-to-End Speech Recognition [J]. INTERSPEECH 2021, 2021, : 967 - 968
- [9] An Investigation of Positional Encoding in Transformer-based End-to-end Speech Recognition [J]. 2021 12TH INTERNATIONAL SYMPOSIUM ON CHINESE SPOKEN LANGUAGE PROCESSING (ISCSLP), 2021,
- [10] On the localness modeling for the self-attention based end-to-end speech synthesis [J]. Neural Networks, 2020, 125 : 121 - 130