A syntactic approach to automatic lip feature extraction for speaker identification

Cited by: 0
Authors
Wark, T [1 ]
Sridharan, S [1 ]
Affiliation
[1] Queensland Univ Technol, Signal Proc Res Ctr, Speech Res Lab, Brisbane, Qld 4001, Australia
Keywords
DOI
Not available
Chinese Library Classification
O42 [Acoustics];
Subject classification codes
070206; 082403
Abstract
This paper presents a novel technique for the tracking and extraction of features from lips for the purpose of speaker identification. In noisy or other adverse conditions, identification performance via the speech signal can be significantly reduced; hence, additional information which can complement the speech signal is of particular interest. In our system, syntactic information is derived from chromatic information in the lip region. A model of the lip contour is formed directly from the syntactic information, with no minimization procedure required to refine estimates. Colour features are then extracted from the lips via profiles taken around the lip contour. Further improvement in lip features is obtained via linear discriminant analysis (LDA). Speaker models are built from the lip features based on the Gaussian Mixture Model (GMM). Identification experiments are performed on the M2VTS database [1], with encouraging results.
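The following sketch illustrates the modelling stage described in the abstract; it is not the authors' implementation. It assumes scikit-learn and NumPy, uses random stand-in data in place of real lip-profile colour features from M2VTS, and picks illustrative values for the feature dimension, frame counts, and GMM mixture order. Per-frame features are projected with LDA, one GMM is trained per speaker, and a test sequence is identified by the highest average log-likelihood.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

n_speakers = 5          # hypothetical number of enrolled speakers
n_frames = 200          # lip-feature vectors per speaker (stand-in value)
dim = 24                # raw colour-profile feature dimension (assumed)

# Stand-in training data: one block of feature vectors per speaker.
X_train = np.vstack([rng.normal(loc=s, scale=1.0, size=(n_frames, dim))
                     for s in range(n_speakers)])
y_train = np.repeat(np.arange(n_speakers), n_frames)

# 1. LDA projection to a lower-dimensional, more discriminative space.
lda = LinearDiscriminantAnalysis(n_components=n_speakers - 1)
Z_train = lda.fit_transform(X_train, y_train)

# 2. One GMM per speaker, trained on that speaker's projected features.
gmms = []
for s in range(n_speakers):
    gmm = GaussianMixture(n_components=4, covariance_type='diag', random_state=0)
    gmm.fit(Z_train[y_train == s])
    gmms.append(gmm)

# 3. Identification: score a test sequence against every speaker model and
#    pick the model with the highest average log-likelihood.
X_test = rng.normal(loc=2, scale=1.0, size=(50, dim))   # frames from speaker 2
Z_test = lda.transform(X_test)
scores = [gmm.score(Z_test) for gmm in gmms]
print("identified speaker:", int(np.argmax(scores)))

The per-speaker GMM scoring mirrors standard GMM-based speaker identification; in the paper the input vectors would be the LDA-projected lip-contour colour profiles rather than synthetic data.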
Pages: 3693-3696
Number of pages: 4
Related papers
50 records in total
  • [1] An approach to statistical lip modelling for speaker identification via chromatic feature extraction
    Wark, T
    Sridharan, S
    Chandran, V
    [J]. FOURTEENTH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, VOLS 1 AND 2, 1998, : 123 - 125
  • [2] Automatic extraction of geometric lip features with application to multi-modal speaker identification
    Arsic, Ivana
    Vilagut, Roger
    Thiran, Jean-Philippe
    [J]. 2006 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO - ICME 2006, VOLS 1-5, PROCEEDINGS, 2006, : 161 - +
  • [3] Automatic lip localization and feature extraction for lip-reading
    Werda, Salah
    Mahdi, Walid
    Ben Hamadou, Abdelmajid
    [J]. VISAPP 2007: PROCEEDINGS OF THE SECOND INTERNATIONAL CONFERENCE ON COMPUTER VISION THEORY AND APPLICATIONS, VOLUME IU/MTSV, 2007, : 268 - +
  • [4] A new approach to designing a feature extractor in speaker identification based on discriminative feature extraction
    Miyajima, C
    Watanabe, H
    Tokuda, K
    Kitamura, T
    Katagiri, S
    [J]. SPEECH COMMUNICATION, 2001, 35 (3-4) : 203 - 218
  • [5] Discriminative feature extraction applied to speaker identification
    Nealand, JH
    Bradley, AB
    Lech, M
    [J]. 2002 6TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING PROCEEDINGS, VOLS I AND II, 2002, : 484 - 487
  • [6] Speaker Identification System Based on Lip-Motion Feature
    Ma, Xinjun
    Wu, Chenchen
    Li, Yuanyuan
    Zhong, Qianyuan
    [J]. COMPUTER VISION SYSTEMS, ICVS 2017, 2017, 10528 : 289 - 299
  • [7] Automatic Speaker Recognition: An Approach using DWT based Feature Extraction and Vector Quantization
    Singhai, Jyoti
    Singhai, Rakesh
    [J]. IETE TECHNICAL REVIEW, 2007, 24 (05) : 395 - 402
  • [8] Lip feature extraction towards an automatic speechreading system
    Zhang, X
    Mersereau, RM
    [J]. 2000 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOL III, PROCEEDINGS, 2000, : 226 - 229
  • [9] AN AUTOMATIC APPROACH TO FEATURE EXTRACTION
    Angioni, Manuela
    Tuveri, Franco
    [J]. ICAART: PROCEEDINGS OF THE 4TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE, VOL 1, 2012, : 473 - 476
  • [10] PHYSIOLOGICALLY-MOTIVATED FEATURE EXTRACTION FOR SPEAKER IDENTIFICATION
    Wang, Jianglin
    Johnson, Michael T.
    [J]. 2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2014,