Speech-synchronized facial animation that controls only the movement of the mouth is typically perceived as wooden and unnatural. We propose a method for generating additional facial expressions, such as movements of the head, the eyes, and the eyebrows, fully automatically from the input speech signal. This is achieved by extracting prosodic parameters such as pitch flow and power spectrum from the speech signal and using them to control facial animation parameters in accordance with results from paralinguistic research.
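As a rough illustration of the kind of pipeline the abstract describes, the sketch below extracts two prosodic parameters per frame (a crude autocorrelation-based pitch estimate and RMS power) from a synthetic signal and maps pitch excursions above a baseline to a hypothetical eyebrow-raise parameter. All function names, the pitch estimator, and the mapping rule are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Split a 1-D signal into overlapping analysis frames."""
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def pitch_autocorr(frame, sr, fmin=75.0, fmax=400.0):
    """Crude autocorrelation pitch estimate (an assumption, not the paper's method)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 220 * t)        # synthetic 220 Hz "voiced" signal

frames = frame_signal(x, 640, 320)     # 40 ms frames, 20 ms hop
energy = np.sqrt((frames ** 2).mean(axis=1))            # RMS power per frame
pitch = np.array([pitch_autocorr(f, sr) for f in frames])

# Hypothetical mapping: eyebrow raise driven by pitch excursion above a baseline,
# loosely following the paralinguistic link between raised pitch and raised brows.
baseline = np.median(pitch)
eyebrow_raise = np.clip((pitch - baseline) / 100.0, 0.0, 1.0)
```

A real system would of course use a robust pitch tracker on recorded speech and drive the full set of animation parameters; this sketch only shows the parameter-extraction and mapping idea in miniature.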
Affiliation:
LMHC PLLC, Dancing Dialogue LCAT, 26 Main St, Cold Spring on Hudson, NY 10516 USA