Multimodal Sensor-Input Architecture with Deep Learning for Audio-Visual Speech Recognition in Wild

Cited by: 5
Authors
He, Yibo [1]
Seng, Kah Phooi [1,2]
Ang, Li Minn [3]
Affiliations
[1] Xian Jiaotong Liverpool Univ, Sch AI & Adv Comp, Suzhou 215123, Peoples R China
[2] Queensland Univ Technol, Sch Comp Sci, Brisbane, Qld 4000, Australia
[3] Univ Sunshine Coast, Sch Sci Technol & Engn, Sippy Downs, Qld 4502, Australia
Keywords
multimodal sensing; audio-visual speech recognition; deep learning
DOI
10.3390/s23041834
Chinese Library Classification (CLC)
O65 [Analytical Chemistry]
Discipline Codes
070302; 081704
Abstract
This paper investigates multimodal sensor architectures with deep learning for audio-visual speech recognition (AVSR), focusing on in-the-wild scenarios. The term "in the wild" describes AVSR on unconstrained, natural-language audio and video streams. AVSR is a speech-recognition task that leverages both an audio input of the human voice and an aligned visual input of lip motions. Because in-the-wild scenarios typically contain more noise, AVSR performance degrades. Here, we propose improvements to AVSR models by incorporating data-augmentation techniques that generate additional samples for building the classification models. For data augmentation, we combined conventional approaches (e.g., flips and rotations) with newer approaches such as generative adversarial networks (GANs). To validate the approach, we trained on augmented data from the well-known LRS2 (Lip Reading Sentences 2) and LRS3 datasets and tested on the original data. The experimental results indicate that the proposed AVSR model and framework, combined with the augmentation approach, improve in-the-wild performance on noisy datasets. Furthermore, we review automatic speech recognition (ASR) and AVSR architectures and give a concise summary of previously proposed AVSR models.
Pages: 12
Related Papers
50 records in total
  • [41] Séguier, R.; Mercier, D. Audio-visual speech recognition, one pass learning with spiking neurons. Artificial Neural Networks - ICANN 2002, 2002, 2415: 1207-1212
  • [42] Saenko, Kate; Livescu, Karen. An asynchronous DBN for audio-visual speech recognition. 2006 IEEE Spoken Language Technology Workshop, 2006: 154+
  • [43] Kaynak, M. N.; Zhi, Q.; Cheok, A. D.; Sengupta, K.; Chung, K. C. Audio-visual modeling for bimodal speech recognition. 2001 IEEE International Conference on Systems, Man, and Cybernetics, Vols 1-5: e-Systems and e-Man for Cybernetics in Cyberspace, 2002: 181-186
  • [44] Nakamura, S. Statistical multimodal integration for audio-visual speech processing. IEEE Transactions on Neural Networks, 2002, 13 (4): 854-866
  • [45] Zhang, X. Z.; Mersereau, R. M.; Clements, M. Bimodal fusion in audio-visual speech recognition. 2002 International Conference on Image Processing, Vol. I, Proceedings, 2002: 964-967
  • [46] Lu, Rui; Duan, Zhiyao; Zhang, Changshui. Audio-Visual Deep Clustering for Speech Separation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2019, 27 (11): 1697-1712
  • [47] Ramadan, Rabie A. Retraction Note: Detecting adversarial attacks on audio-visual speech recognition using deep learning method. International Journal of Speech Technology, 2022, 25 (Suppl 1): 29
  • [48] Ramadan, Rabie A. RETRACTED ARTICLE: Detecting adversarial attacks on audio-visual speech recognition using deep learning method. International Journal of Speech Technology, 2022, 25: 625-631
  • [49] Zhu, Hao; Luo, Man-Di; Wang, Rui; Zheng, Ai-Hua; He, Ran. Deep Audio-visual Learning: A Survey. International Journal of Automation and Computing, 2021, 18: 351-376
  • [50] Zhu, Hao; Luo, Man-Di; Wang, Rui; Zheng, Ai-Hua; He, Ran. Deep Audio-visual Learning: A Survey. Machine Intelligence Research, 2021, 18 (3): 351-376