The semantic space for motion-captured facial expressions

Cited: 1
Authors
Castillo, S. [1 ]
Legde, K. [1 ]
Cunningham, D. W. [1 ]
Affiliations
[1] Brandenburg Univ Technol Cottbus Senftenberg, Fak Math Informat Nat Wissensch Tech MINT 1, Chair Graph Syst, Cottbus, Germany
Keywords
animation; emotional models; facial expressions; motion capture; COMMUNICATION; MODEL;
DOI
10.1002/cav.1823
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline codes
081202 ; 0835 ;
Abstract
We cannot not communicate! During our daily lives, we convey information verbally and nonverbally. Most of the affective meaning of a message is transferred with the help of facial expressions; therefore, when trying to establish a realistic human-like virtual character, we should pay close attention to the animation. Motion capture is one of the most common techniques, but due to the wide range of expressions humans use, the recording time and data needed are vast. To address this problem, we propose the use of semantic spaces, as they help in characterizing and positioning expressions by finding correlations between them. In this paper, we extend prior research by providing the semantic spaces underlying real videos and motion capture data for a total of 62 conversational expressions. Our results correlate strongly with previous work, showing that our new expressions were correctly recognized. Moreover, our results can be used in future work to directly project potential new recordings of these 62 expressions onto the found spaces.
Pages: 11
Related Papers
50 records in total
  • [2] Method of generating coded description of human body motion from motion-captured data
    Hachimura, K
    Nakamura, M
    ROBOT AND HUMAN COMMUNICATION, PROCEEDINGS, 2001, : 122 - 127
  • [3] Dance Movement Learning for Labanotation Generation Based on Motion-Captured Data
    Li, Min
    Miao, Zhenjiang
    Ma, Cong
    IEEE ACCESS, 2019, 7 : 161561 - 161572
  • [4] Human-like action recognition system on whole body motion-captured file
    Mori, T
    Tsujioka, K
    Sato, T
    IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4: EXPANDING THE SOCIETAL ROLE OF ROBOTICS IN THE NEXT MILLENNIUM, 2001, : 2066 - 2073
  • [5] A Multimodal Motion-Captured Corpus of Matched and Mismatched Extravert-Introvert Conversational Pairs
    Tolins, Jackson
    Liu, Kris
    Wang, Yingying
    Tree, Jean E. Fox
    Walker, Marilyn
    Neff, Michael
    LREC 2016 - TENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2016, : 3469 - 3476
  • [6] Automatic Labanotation Generation from Motion-captured Data Based on Hidden Markov Models
    Li, Min
    Miao, Zhenjiang
    Ma, Cong
    PROCEEDINGS 2017 4TH IAPR ASIAN CONFERENCE ON PATTERN RECOGNITION (ACPR), 2017, : 793 - 798
  • [7] INFANT ATTENTION TO FACIAL EXPRESSIONS AND FACIAL MOTION
    BIRINGEN, ZC
    JOURNAL OF GENETIC PSYCHOLOGY, 1987, 148 (01): : 127 - 133
  • [8] Conditioned Pain Modulation (CPM) Effects Captured in Facial Expressions
    Kunz, Miriam
    Bunk, Stefanie F.
    Karmann, Anna J.
    Baer, Karl-Juergen
    Lautenbacher, Stefan
    JOURNAL OF PAIN RESEARCH, 2021, 14 : 793 - 803
  • [9] Noise Reduction in Human Motion-Captured Signals for Computer Animation based on B-Spline Filtering
    Ardestani, Mehdi Memar
    Yan, Hong
    SENSORS, 2022, 22 (12)
  • [10] The semantic space for facial communication
    Castillo, Susana
    Wallraven, Christian
    Cunningham, Douglas W.
    COMPUTER ANIMATION AND VIRTUAL WORLDS, 2014, 25 (3-4) : 225 - 233