Autoregressive Stylized Motion Synthesis with Generative Flow

Cited by: 27
Authors
Wen, Yu-Hui [1 ]
Yang, Zhipeng [2 ]
Fu, Hongbo [3 ]
Gao, Lin [2 ,4 ]
Sun, Yanan [1 ]
Liu, Yong-Jin [1 ]
Affiliations
[1] Tsinghua Univ, BNRist, CS Dept, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
[3] City Univ Hong Kong, Sch Creat Media, Hong Kong, Peoples R China
[4] Chinese Acad Sci, Beijing Key Lab Mobile Comp & Pervas Device, ICT, Beijing, Peoples R China
Keywords
DOI
10.1109/CVPR46437.2021.01340
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Motion style transfer is an important problem in many computer graphics and computer vision applications, including human animation, games, and robotics. Most existing deep learning methods for this problem are supervised and trained by registered motion pairs. In addition, these methods are often limited to yielding a deterministic output, given a pair of style and content motions. In this paper, we propose an unsupervised approach for motion style transfer by synthesizing stylized motions autoregressively using a generative flow model M. M is trained to maximize the exact likelihood of a collection of unlabeled motions, based on an autoregressive context of poses in previous frames and a control signal representing the movement of a root joint. Thanks to invertible flow transformations, latent codes that encode deep properties of motion styles are efficiently inferred by M. By combining the latent codes (from an input style motion S) with the autoregressive context and control signal (from an input content motion C), M outputs a stylized motion which transfers style from S to C. Moreover, our model is probabilistic and is able to generate various plausible motions with a specific style. We evaluate the proposed model on motion capture datasets containing different human motion styles. Experimental results show that our model outperforms the state-of-the-art methods, despite not requiring manually labeled training data.
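The abstract's key mechanism is an invertible flow: encode a style pose into a latent code, then decode that code under the content motion's autoregressive context. The following minimal sketch illustrates this with a single conditional affine coupling layer, the standard building block of generative flows. It is an illustrative simplification, not the paper's model: the dimensions, the random linear "conditioning networks" `W_s`/`W_t`, and the single-layer encode/decode are all hypothetical stand-ins for the paper's learned networks.

```python
import numpy as np

# One conditional affine coupling layer (illustrative sketch; the paper's
# model M stacks many such invertible transforms with learned networks).
rng = np.random.default_rng(0)
D = 6  # pose feature dimension (illustrative)
C = 4  # context dimension: previous poses + root-joint control (illustrative)

# Hypothetical conditioning-network weights (random stand-ins for learned nets).
W_s = rng.normal(size=(D // 2 + C, D // 2)) * 0.1  # produces log-scale s
W_t = rng.normal(size=(D // 2 + C, D // 2)) * 0.1  # produces shift t

def coupling_forward(x, ctx):
    """Map a pose x to a latent code z, conditioned on context ctx."""
    x1, x2 = x[:D // 2], x[D // 2:]
    h = np.concatenate([x1, ctx])
    s, t = h @ W_s, h @ W_t
    z2 = x2 * np.exp(s) + t  # affine transform of the second half
    return np.concatenate([x1, z2])

def coupling_inverse(z, ctx):
    """Exactly invert the coupling layer (the invertibility the paper exploits)."""
    z1, z2 = z[:D // 2], z[D // 2:]
    h = np.concatenate([z1, ctx])
    s, t = h @ W_s, h @ W_t
    x2 = (z2 - t) * np.exp(-s)
    return np.concatenate([z1, x2])

# Style transfer, schematically: infer the latent code from a style pose,
# then decode it under the content motion's context to get a stylized pose.
style_pose, style_ctx = rng.normal(size=D), rng.normal(size=C)
content_ctx = rng.normal(size=C)

z = coupling_forward(style_pose, style_ctx)    # encode style into latent code
stylized = coupling_inverse(z, content_ctx)    # decode with content context

# Invertibility check: encode then decode under the same context is exact.
recon = coupling_inverse(coupling_forward(style_pose, style_ctx), style_ctx)
print(np.allclose(recon, style_pose))  # True
```

Because the transform is bijective, no information is lost in the encode step, which is what lets the latent code carry style while the conditioning context injects content.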
Pages: 13607 - 13616
Page count: 10
Related Papers
50 records in total
  • [41] Fitting Autoregressive Graph Generative Models through Maximum Likelihood Estimation
    Han, Xu
    Chen, Xiaohui
    Ruiz, Francisco J. R.
    Liu, Li-Ping
    JOURNAL OF MACHINE LEARNING RESEARCH, 2023, 24
  • [42] Efficient generative modeling of protein sequences using simple autoregressive models
    Trinquier, Jeanne
    Uguzzoni, Guido
    Pagnani, Andrea
    Zamponi, Francesco
    Weigt, Martin
    NATURE COMMUNICATIONS, 2021, 12 (01)
  • [43] A Semi-Autoregressive Graph Generative Model for Dependency Graph Parsing
    Ma, Ye
    Sun, Mingming
    Li, Ping
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023, : 4218 - 4230
  • [45] Efficient generative model for motion deblurring
    Xiang, Han
    Sang, Haiwei
    Sun, Lilei
    Zhao, Yong
JOURNAL OF ENGINEERING-JOE, 2020, 2020 (13): 491 - 494
  • [46] Generative Motion: Queer Ecology and Avatar
    Anglin, Sallie
    JOURNAL OF POPULAR CULTURE, 2015, 48 (02): : 341 - 354
  • [47] MMM: Generative Masked Motion Model
    Pinyoanuntapong, Ekkasit
    Wang, Pu
    Lee, Minwoo
    Chen, Chen
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2024, 2024, : 1546 - 1555
  • [48] Generative model for human motion recognition
    Excell, David
    Cemgil, A. Taylan
    Fitzgerald, William J.
    PROCEEDINGS OF THE 5TH INTERNATIONAL SYMPOSIUM ON IMAGE AND SIGNAL PROCESSING AND ANALYSIS, 2007, : 423 - 428
  • [49] Deep Generative Filter for Motion Deblurring
    Ramakrishnan, Sainandan
    Pachori, Shubham
    Gangopadhyay, Aalok
    Raman, Shanmuganathan
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2017), 2017, : 2993 - 3000
  • [50] Example-Based Synthesis of Stylized Facial Animations
    Fiser, Jakub
    Jamriska, Ondrej
    Simons, David
    Shechtman, Eli
    Lu, Jingwan
    Asente, Paul
    Lukac, Michal
    Sykora, Daniel
    ACM TRANSACTIONS ON GRAPHICS, 2017, 36 (04):