Expressive facial gestures from motion capture data

Cited by: 10
Authors
Ju, Eunjung [1 ]
Lee, Jehee [1 ]
Affiliation
[1] Seoul Natl Univ, Seoul 151, South Korea
DOI
10.1111/j.1467-8659.2008.01135.x
Chinese Library Classification
TP31 [Computer software];
Discipline Code
081202; 0835;
Abstract
Human facial gestures often exhibit such natural stochastic variations as how often the eyes blink, how often the eyebrows and the nose twitch, and how the head moves while speaking. The stochastic movements of facial features are key ingredients for generating convincing facial expressions. Although such small variations have been simulated using noise functions in many graphics applications, modulating noise functions to match natural variations induced from the affective states and the personality of characters is difficult and not intuitive. We present a technique for generating subtle expressive facial gestures (facial expressions and head motion) semi-automatically from motion capture data. Our approach is based on Markov random fields that are simulated in two levels. In the lower level, the coordinated movements of facial features are captured, parameterized, and transferred to synthetic faces using basis shapes. The upper level represents independent stochastic behavior of facial features. The experimental results show that our system generates expressive facial gestures synchronized with input speech.
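The two-level construction described in the abstract can be illustrated with a small, hypothetical sketch; this is not the authors' implementation. The "upper level" of independent stochastic feature behavior is stood in for by a simple per-weight AR(1) process rather than the paper's Markov random fields, and the "lower level" transfer to a synthetic face is a plain linear blend of placeholder basis shapes. All names (neutral, basis, sample_weights, synthesize) and dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder face model: V mesh vertices, K basis shapes (e.g. blink,
# brow raise, nose twitch, head nod). Real data would come from motion capture.
V, K = 500, 4
neutral = np.zeros((V, 3))                       # neutral face mesh (stand-in)
basis = rng.normal(scale=0.01, size=(K, V, 3))   # basis-shape displacements (stand-in)

def sample_weights(T, rho=0.9, sigma=0.15):
    # "Upper level" stand-in: each basis weight follows an independent,
    # smooth AR(1) process, giving stochastic but temporally coherent gestures.
    w = np.zeros((T, K))
    for t in range(1, T):
        w[t] = rho * w[t - 1] + sigma * np.sqrt(1 - rho**2) * rng.normal(size=K)
    return np.clip(w, 0.0, 1.0)

def synthesize(weights):
    # "Lower level" stand-in: transfer to the synthetic face by linearly
    # blending basis shapes onto the neutral mesh, one mesh per frame.
    return neutral[None] + np.einsum('tk,kvd->tvd', weights, basis)

frames = synthesize(sample_weights(T=240))       # 240 frames of subtle motion
print(frames.shape)                              # (240, 500, 3)
```

In the paper, by contrast, the temporal statistics are learned from captured data and the coordinated (lower-level) feature movements are parameterized before being transferred through basis shapes, so the sketch only conveys the layering of independent stochastic control over blended basis shapes.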
Pages: 381-388
Page count: 8