Automatic Generation of Dance and Facial Expressions Linked to Music using HMM

Cited by: 0
Authors
Sato, Taiki [1 ]
Osana, Yuko [1 ]
Affiliations
[1] Tokyo Univ Technol, Sch Comp Sci, 1404-1 Katakura, Tokyo 1920982, Japan
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP3 [computing technology, computer technology];
Discipline classification code
0812
Abstract
In this paper, we propose automatic generation of dance and facial expressions linked to music using a Hidden Markov Model (HMM). In the proposed system, an acoustic feature is first extracted from the music given by a user for each analysis interval, whose length is one bar. Mel-Frequency Cepstrum Coefficients (MFCC) are used as the acoustic features. Similar phrases are often repeated in music, and similar dance and facial expression motions are often assigned to similar phrases. Likewise, the proposed system assigns similar dance motions and facial expression motions to intervals whose acoustic features are similar. The dance motion that serves as the basic form when generating similar dance motions is called a dance vocabulary, and the facial expression motion that serves as the basic form when generating similar facial expression motions is called an expression vocabulary. In the proposed system, the dance motions and the facial expression motions for each analysis interval are classified by the K-means++ method, and the vocabularies are associated with the resulting class labels. Next, a Hidden Markov Model is used to determine a sequence of dance vocabularies from the correspondence between the acoustic features and the dance vocabularies. Finally, the system randomly selects a motion corresponding to each determined dance vocabulary and expression vocabulary, then interpolates, combines, and outputs the motions over the analysis intervals. Computer experiments confirmed that the proposed system can automatically generate dance and facial expressions.
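The pipeline described in the abstract (bar-level MFCC extraction, K-means++ classification of intervals, and HMM decoding of a dance-vocabulary sequence) can be illustrated with a minimal Python sketch. This is not the authors' implementation: the fixed tempo and meter used to cut one-bar intervals, the cluster and vocabulary counts, the placeholder HMM parameters, and the file name "song.wav" are all hypothetical assumptions made for illustration.

```python
# Minimal sketch of the abstract's pipeline, NOT the authors' code.
# Assumptions (hypothetical): fixed tempo/meter for bar segmentation,
# HMM parameters given rather than learned from a motion corpus,
# and all file names and size constants are placeholders.
import numpy as np
import librosa
from sklearn.cluster import KMeans

N_MFCC = 12              # MFCC dimensions per frame (assumed)
N_ACOUSTIC_CLASSES = 8   # clusters of bar-level acoustic features (assumed)
N_DANCE_VOCAB = 6        # dance vocabularies = HMM hidden states (assumed)


def bar_features(path, bpm=120.0, beats_per_bar=4):
    """Mean MFCC vector for each one-bar analysis interval (fixed tempo assumed)."""
    y, sr = librosa.load(path, sr=None, mono=True)
    bar_len = int(sr * 60.0 / bpm * beats_per_bar)
    feats = []
    for start in range(0, len(y) - bar_len + 1, bar_len):
        seg = y[start:start + bar_len]
        mfcc = librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=N_MFCC)
        feats.append(mfcc.mean(axis=1))          # one feature vector per bar
    return np.array(feats)


def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely hidden-state (dance-vocabulary) sequence for observed
    acoustic-class labels, computed in log space for numerical stability."""
    n_states, T = trans_p.shape[0], len(obs)
    log_d = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    log_d[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for j in range(n_states):
            scores = log_d[t - 1] + np.log(trans_p[:, j])
            back[t, j] = int(np.argmax(scores))
            log_d[t, j] = scores[back[t, j]] + np.log(emit_p[j, obs[t]])
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(log_d[-1]))
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path


if __name__ == "__main__":
    feats = bar_features("song.wav")                       # hypothetical input
    # K-means++ classification of the bar-level acoustic features.
    km = KMeans(n_clusters=N_ACOUSTIC_CLASSES, init="k-means++",
                n_init=10, random_state=0).fit(feats)
    acoustic_labels = km.labels_
    # Placeholder HMM parameters; in the paper these would reflect the learned
    # correspondence between acoustic features and dance vocabularies.
    rng = np.random.default_rng(0)
    start_p = np.full(N_DANCE_VOCAB, 1.0 / N_DANCE_VOCAB)
    trans_p = rng.dirichlet(np.ones(N_DANCE_VOCAB), size=N_DANCE_VOCAB)
    emit_p = rng.dirichlet(np.ones(N_ACOUSTIC_CLASSES), size=N_DANCE_VOCAB)
    dance_vocab_seq = viterbi(acoustic_labels, start_p, trans_p, emit_p)
    print(dance_vocab_seq)        # one dance-vocabulary label per bar
```

The paper's system additionally selects concrete motions at random for each decoded vocabulary, handles facial expression vocabularies in parallel, and interpolates between motions within each bar; those steps are omitted from this sketch.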
Pages: 3999-4006
Number of pages: 8