Facial deformations for MPEG-4

Cited by: 19
Authors
Escher, M [1 ]
Pandzic, I [1 ]
Thalmann, NM [1 ]
Affiliation
[1] Univ Geneva, MIRALab, CUI, CH-1211 Geneva, Switzerland
Keywords
MPEG-4; SNHC; facial animation; face modelling;
DOI
10.1109/CA.1998.681908
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The new MPEG-4 standard, scheduled to become an International Standard in February 1999, will include support not only for natural video and audio, but also for synthetic graphics and sounds. In particular, representation of human faces and bodies will be supported. The current draft specification of the standard [MPEG-N1901, MPEG-N1902] defines Facial Animation Parameters (FAPs) and Facial Definition Parameters (FDPs). FAPs are used to control facial animation at extremely low bitrates (approx. 2 kbit/s). FDPs are used to define the shape of the face, either by deforming a generic facial model or by supplying a substitute model. We present algorithms to interpret the part of the FDPs dealing with the deformation of a generic facial model, leading to a personalisation of the model. The implementation starts from a generic model, which is deformed to fit the input parameters. The input parameters must include the facial feature points, and may optionally include texture coordinates and a calibration face model. We apply a cylindrical projection to the generic face in order to interpolate any missing feature points and to fit the texture, if supplied. Then we use a Dirichlet Free-Form Deformation [Moccozet 97] interpolation method to deform the generic head according to the set of feature points. If a calibration face model is present, the fitting method is based on matching cylindrical projections and on barycentric coordinates to interpolate the non-feature points.
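The cylindrical projection mentioned in the abstract can be illustrated with a minimal sketch: each 3D vertex of the generic head is mapped to a 2D (u, v) coordinate from its azimuth around the vertical axis and its height. This is a generic formulation of cylindrical mapping under assumed conventions (y-up, angle measured from the +z direction), not the authors' actual implementation.

```python
import math

def cylindrical_projection(vertices):
    """Map 3D head vertices to 2D (u, v) coordinates on a cylinder
    around the vertical (y) axis.

    u comes from the azimuth angle around y, normalised to [0, 1];
    v comes from the height, normalised over the vertex set.
    Illustrative sketch only: axis and angle conventions are assumptions.
    """
    y_min = min(p[1] for p in vertices)
    y_max = max(p[1] for p in vertices)
    y_range = (y_max - y_min) or 1.0  # avoid division by zero for flat input
    uv = []
    for x, y, z in vertices:
        u = math.atan2(x, z) / (2.0 * math.pi) + 0.5  # azimuth -> [0, 1]
        v = (y - y_min) / y_range                     # height  -> [0, 1]
        uv.append((u, v))
    return uv
```

Once the generic model and the input feature points live in this common 2D parameter space, missing feature points can be interpolated and supplied texture coordinates fitted by simple 2D matching rather than full 3D correspondence.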
Pages: 56-62
Page count: 7