Text2Performer: Text-Driven Human Video Generation

Cited by: 1
Authors
Jiang, Yuming [1 ]
Yang, Shuai [1 ]
Koh, Tong Liang [1 ]
Wu, Wayne [2 ]
Loy, Chen Change [1 ]
Liu, Ziwei [1 ]
Affiliations
[1] Nanyang Technol Univ, S Lab, Singapore, Singapore
[2] Shanghai AI Lab, Shanghai, Peoples R China
DOI
10.1109/ICCV51070.2023.02079
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Text-driven content creation has evolved into a transformative technique that revolutionizes creativity. Here we study the task of text-driven human video generation, where a video sequence is synthesized from texts describing the appearance and motions of a target performer. Compared to general text-driven video generation, human-centric video generation requires maintaining the appearance of the synthesized human while performing complex motions. In this work, we present Text2Performer to generate vivid human videos with articulated motions from texts. Text2Performer has two novel designs: 1) a decomposed human representation and 2) a diffusion-based motion sampler. First, we decompose the VQVAE latent space into human appearance and pose representations in an unsupervised manner by utilizing the nature of human videos. In this way, the appearance is well maintained across the generated frames. Then, we propose a continuous VQ-diffuser to sample a sequence of pose embeddings. Unlike existing VQ-based methods that operate in the discrete space, the continuous VQ-diffuser directly outputs continuous pose embeddings for better motion modeling. Finally, a motion-aware masking strategy is designed to mask the pose embeddings spatio-temporally to enhance temporal coherence. Moreover, to facilitate the task of text-driven human video generation, we contribute a Fashion-Text2Video dataset with manually annotated action labels and text descriptions. Extensive experiments demonstrate that Text2Performer generates high-quality human videos (up to 512 x 256 resolution) with diverse appearances and flexible motions. Our project page is https://yumingj.github.io/projects/Text2Performer.html
Pages: 22690 - 22700
Page count: 11
Related Papers
50 records in total
  • [41] Text2Video: Automatic Video Generation Based on Text Scripts
    Yu, Yipeng
    Tu, Zirui
    Lu, Longyu
    Chen, Xiao
    Zhan, Hui
    Sun, Zixun
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 2753 - 2755
  • [42] ControlVideo: conditional control for one-shot text-driven video editing and beyond
    Zhao, Min
    Wang, Rongzhen
    Bao, Fan
    Li, Chongxuan
    Zhu, Jun
    SCIENCE CHINA-INFORMATION SCIENCES, 2025, 68 (03)
  • [43] Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model
    Wang, Yin
    Leng, Zhiying
    Li, Frederick W. B.
    Wu, Shun-Cheng
    Liang, Xiaohui
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 21978 - 21987
  • [44] Utilizing Text-Video Relationships: A Text-Driven Multi-modal Fusion Framework for Moment Retrieval and Highlight Detection
    Zhou, Siyu
    Zhang, Fjwei
    Wang, Ruomei
    Su, Zhuo
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT X, 2025, 15040 : 254 - 268
  • [45] Text2Scene: Text-driven Indoor Scene Stylization with Part-aware Details
    Hwang, Inwoo
    Kim, Hyeonwoo
    Kim, Young Min
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 1890 - 1899
  • [46] Unsupervised Prompt Tuning for Text-Driven Object Detection
    He, Weizhen
    Chen, Weijie
    Chen, Binbin
    Yang, Shicai
    Xie, Di
    Lin, Luojun
    Qi, Donglian
    Zhuang, Yueting
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 2651 - 2661
  • [47] Exploring Text-Driven Approaches for Online Action Detection
    Benavent-Lledo, Manuel
    Mulero-Perez, David
    Ortiz-Perez, David
    Garcia-Rodriguez, Jose
    Orts-Escolano, Sergio
    BIOINSPIRED SYSTEMS FOR TRANSLATIONAL APPLICATIONS: FROM ROBOTICS TO SOCIAL ENGINEERING, PT II, IWINAC 2024, 2024, 14675 : 55 - 64
  • [48] LGTM: Local-to-Global Text-Driven Human Motion Diffusion Model
    Sun, Haowen
    Zheng, Ruikun
    Huang, Haibin
    Ma, Chongyang
    Huang, Hui
    Hu, Ruizhen
    PROCEEDINGS OF SIGGRAPH 2024 CONFERENCE PAPERS, 2024,
  • [49] CLIP-Actor: Text-Driven Recommendation and Stylization for Animating Human Meshes
    Youwang, Kim
    Ji-Yeon, Kim
    Oh, Tae-Hyun
    COMPUTER VISION - ECCV 2022, PT III, 2022, 13663 : 173 - 191
  • [50] Blended Diffusion for Text-driven Editing of Natural Images
    Avrahami, Omri
    Lischinski, Dani
    Fried, Ohad
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 18187 - 18197