Guided Motion Diffusion for Controllable Human Motion Synthesis

Cited by: 6
Authors
Karunratanakul, Korrawe [1]
Preechakul, Konpat [2]
Suwajanakorn, Supasorn [2]
Tang, Siyu [1]
Affiliations
[1] Swiss Fed Inst Technol, Zurich, Switzerland
[2] VISTEC, Pa Yup Nai, Thailand
Funding
Swiss National Science Foundation
DOI
10.1109/ICCV51070.2023.00205
CLC classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Denoising diffusion models have shown great promise in human motion synthesis conditioned on natural language descriptions. However, integrating spatial constraints, such as pre-defined motion trajectories and obstacles, remains a challenge despite being essential for bridging the gap between isolated human motion and its surrounding environment. To address this issue, we propose Guided Motion Diffusion (GMD), a method that incorporates spatial constraints into the motion generation process. Specifically, we propose an effective feature projection scheme that manipulates motion representation to enhance the coherency between spatial information and local poses. Together with a new imputation formulation, the generated motion can reliably conform to spatial constraints such as global motion trajectories. Furthermore, given sparse spatial constraints (e.g. sparse keyframes), we introduce a new dense guidance approach to turn a sparse signal, which is susceptible to being ignored during the reverse steps, into denser signals to guide the generated motion to the given constraints. Our extensive experiments justify the development of GMD, which achieves a significant improvement over state-of-the-art methods in text-based motion generation while allowing control of the synthesized motions with spatial constraints.
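The abstract describes steering the reverse diffusion process with spatial constraints through imputation and a guidance signal derived from those constraints. The following is a minimal, hypothetical PyTorch sketch of that general pattern, not the authors' implementation: the dummy denoiser, linear noise schedule, keyframe mask, and guidance scale are all assumptions made purely for illustration.

```python
# Minimal sketch (not the paper's code) of constraint-guided reverse diffusion
# with imputation, following the general idea described in the abstract.
# All names and hyperparameters here are hypothetical.
import torch

T = 50                                   # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)    # simple linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def denoiser(x_t, t):
    """Placeholder motion denoiser that predicts the clean motion x0.
    A real model would be a trained, text-conditioned network."""
    return 0.9 * x_t  # dummy prediction, for illustration only

def constraint_loss(x0_pred, target_traj, mask):
    """Squared distance to the target trajectory at the constrained keyframes."""
    return ((x0_pred - target_traj) ** 2 * mask).sum()

def guided_sample(target_traj, mask, guidance_scale=1.0):
    """Reverse diffusion that (a) imputes known trajectory values and
    (b) nudges the sample along the constraint-loss gradient."""
    x_t = torch.randn_like(target_traj)
    for t in reversed(range(T)):
        x_t = x_t.detach().requires_grad_(True)
        x0_pred = denoiser(x_t, t)

        # Guidance: gradient of the constraint loss w.r.t. the noisy sample.
        loss = constraint_loss(x0_pred, target_traj, mask)
        grad = torch.autograd.grad(loss, x_t)[0]

        # Imputation: overwrite the prediction with the known trajectory
        # values wherever the constraint mask is set.
        x0_imputed = mask * target_traj + (1 - mask) * x0_pred

        # DDPM-style posterior mean built from the imputed x0 prediction.
        ab_t = alpha_bars[t]
        ab_prev = alpha_bars[t - 1] if t > 0 else torch.tensor(1.0)
        mean = (ab_prev.sqrt() * betas[t] * x0_imputed
                + alphas[t].sqrt() * (1.0 - ab_prev) * x_t) / (1.0 - ab_t)

        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        x_t = (mean - guidance_scale * grad + betas[t].sqrt() * noise).detach()
    return x_t

# Toy usage: constrain a 60-frame, 3-D root trajectory at its first and last frame.
target = torch.zeros(60, 3)
mask = torch.zeros_like(target)
mask[0] = 1.0
mask[-1] = 1.0
sample = guided_sample(target, mask)
print(sample.shape)  # torch.Size([60, 3])
```

In this sketch the guidance gradient is computed from the denoiser's clean-motion prediction, so a constraint placed at only two keyframes still yields a non-zero correction at every reverse step; this mirrors the abstract's motivation for turning a sparse constraint signal into a denser one so that it is not ignored during sampling.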
Pages: 2151-2162
Number of pages: 12
Related papers (50 in total)
  • [1] Object Motion Guided Human Motion Synthesis
    Li, Jiaman
    Wu, Jiajun
    Liu, C. Karen
    [J]. ACM TRANSACTIONS ON GRAPHICS, 2023, 42 (06):
  • [2] Controllable Motion Synthesis and Reconstruction with Autoregressive Diffusion Models
    Yin, Wenjie
    Tu, Ruibo
    Yin, Hang
    Kragic, Danica
    Kjellstrom, Hedvig
    Bjorkman, Marten
    [J]. 2023 32ND IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, RO-MAN, 2023, : 1102 - 1108
  • [3] PhysDiff: Physics-Guided Human Motion Diffusion Model
    Yuan, Ye
    Song, Jiaming
    Iqbal, Umar
    Vahdat, Arash
    Kautz, Jan
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 15964 - 15975
  • [4] Controllable motion synthesis in a gaseous medium
    Shi, L
    Yu, YZ
    Wojtan, C
    Chenney, S
    [J]. VISUAL COMPUTER, 2005, 21 (07): : 474 - 487
  • [6] Language-guided Human Motion Synthesis with Atomic Actions
    Zhai, Yuanhao
    Huang, Mingzhen
    Luan, Tianyu
    Dong, Lu
    Nwogu, Ifeoma
    Lyu, Siwei
    Doermann, David
    Yuan, Junsong
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 5262 - 5271
  • [7] Enhanced Fine-Grained Motion Diffusion for Text-Driven Human Motion Synthesis
    Wei, Dong
    Sun, Xiaoning
    Sun, Huaijiang
    Hu, Shengxiang
    Li, Bin
    Li, Weiqing
    Lu, Jianfeng
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 6, 2024, : 5876 - 5884
  • [8] Controllable Variation Synthesis for Surface Motion Capture
    Boukhayma, Adnane
    Boyer, Edmond
    [J]. PROCEEDINGS 2017 INTERNATIONAL CONFERENCE ON 3D VISION (3DV), 2017, : 309 - 317
  • [9] Searching Motion Graphs for Human Motion Synthesis
    Liu, Chenchen
    Mu, Yadong
    [J]. PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 871 - 879
  • [10] Automatic Motion Segmentation for Human Motion Synthesis
    Schulz, Sebastian
    Woerner, Annika
    [J]. ARTICULATED MOTION AND DEFORMABLE OBJECTS, PROCEEDINGS, 2010, 6169 : 182 - 191