Multimodal Sensor-Input Architecture with Deep Learning for Audio-Visual Speech Recognition in Wild

Cited by: 5
Authors
He, Yibo [1 ]
Seng, Kah Phooi [1 ,2 ]
Ang, Li Minn [3 ]
Affiliations
[1] Xian Jiaotong Liverpool Univ, Sch AI & Adv Comp, Suzhou 215123, Peoples R China
[2] Queensland Univ Technol, Sch Comp Sci, Brisbane, Qld 4000, Australia
[3] Univ Sunshine Coast, Sch Sci Technol & Engn, Sippy Downs, Qld 4502, Australia
Keywords
multimodal sensing; audio-visual speech recognition; deep learning
DOI
10.3390/s23041834
Chinese Library Classification
O65 [Analytical Chemistry]
Discipline Codes
070302; 081704
Abstract
This paper investigates multimodal sensor architectures with deep learning for audio-visual speech recognition (AVSR), focusing on in-the-wild scenarios. AVSR is a speech-recognition task that leverages both an audio input of a human voice and an aligned visual input of lip motions; the term "in the wild" describes AVSR on unconstrained, natural audio and video streams. However, because in-the-wild scenarios contain more noise, AVSR performance degrades. Here, we propose improvements to AVSR models by incorporating data-augmentation techniques that generate additional samples for building the classification models. For data augmentation, we combined conventional approaches (e.g., flips and rotations) with newer approaches such as generative adversarial networks (GANs). To validate the approaches, we trained on augmented data from the well-known Lip Reading Sentences datasets (LRS2 and LRS3) and tested on the original data. The experimental results indicate that the proposed AVSR model and framework, combined with the augmentation approach, improve performance in the wild on noisy datasets. Furthermore, we review automatic speech recognition (ASR) and AVSR architectures and give a concise summary of previously proposed AVSR models.
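As context for the augmentation step described above, the conventional part (flips and rotations) can be sketched in a few lines of Python. This is a minimal illustration, assuming lip-region clips stored as (T, C, H, W) tensors and using torchvision; the flip probability, rotation range, and the augment_clip helper are assumptions for illustration, not the authors' implementation.

import torch
import torchvision.transforms as T

# Illustrative augmentation pipeline; the flip probability and rotation
# range are placeholder values, not settings from the paper.
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),  # a mirrored mouth is still a valid viseme
    T.RandomRotation(degrees=5),    # small in-plane tilt, as from head motion
])

def augment_clip(clip: torch.Tensor) -> torch.Tensor:
    # One call samples the random parameters once, so the same flip and
    # rotation are applied consistently to every frame in the clip.
    return augment(clip)

# Usage: enlarge the training set with augmented copies while keeping the
# test set original, mirroring the train/test protocol described above.
clip = torch.rand(29, 1, 88, 88)    # e.g., 29 grayscale 88x88 mouth crops
augmented = augment_clip(clip)

GAN-based augmentation, the newer approach named in the abstract, would replace the transform above with samples drawn from a generator trained on the same lip-region data; a full sketch is omitted here since the abstract does not specify the GAN architecture used.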
Pages: 12