DiffDance: Cascaded Human Motion Diffusion Model for Dance Generation

Cited by: 4
Authors
Qi, Qiaosong [1 ]
Zhuo, Le [2 ]
Zhang, Aixi [1 ]
Liao, Yue [2 ]
Fang, Fei [1 ]
Liu, Si [2 ]
Yan, Shuicheng [3 ]
Affiliations
[1] Alibaba Grp, Beijing, Peoples R China
[2] Beihang Univ, Beijing, Peoples R China
[3] BAAI & Skywork AI, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; National Key R&D Program of China
Keywords
Diffusion Model; Music-to-Dance; Conditional Generation; Multimodal Learning
DOI
10.1145/3581783.3612307
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
When hearing music, people naturally dance to its rhythm. Automatic dance generation, however, is a challenging task due to the physical constraints of human motion and the need for rhythmic alignment with the target music. Conventional autoregressive methods introduce compounding errors during sampling and struggle to capture the long-term structure of dance sequences. To address these limitations, we present DiffDance, a novel cascaded motion diffusion model designed for high-resolution, long-form dance generation. The model comprises a music-to-dance diffusion model and a sequence super-resolution diffusion model. To bridge the gap between music and motion for conditional generation, DiffDance employs a pretrained audio representation learning model to extract music embeddings and further aligns their embedding space with motion via a contrastive loss. When training our cascaded diffusion model, we also incorporate multiple geometric losses to constrain the model outputs to be physically plausible, and we add a dynamic loss weight that adaptively changes over diffusion timesteps to facilitate sample diversity. Through comprehensive experiments on the benchmark dataset AIST++, we demonstrate that DiffDance generates realistic dance sequences that align effectively with the input music, achieving results comparable to state-of-the-art autoregressive methods.
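The abstract mentions two training-time components: a contrastive loss that aligns music embeddings with motion, and a loss weight that changes over diffusion timesteps. Below is a minimal PyTorch-style sketch of how such components could look. It is not the authors' implementation; the symmetric InfoNCE formulation, the linear weight schedule, and all function names and tensor shapes are assumptions for illustration only.

```python
# Minimal sketch (not the paper's code) of contrastive music-motion
# alignment plus a hypothetical timestep-dependent geometric loss weight.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(music_emb, motion_emb, temperature=0.07):
    """Symmetric InfoNCE loss pulling paired music/motion embeddings together.

    music_emb, motion_emb: (batch, dim) outputs of the respective encoders.
    Diagonal entries of the similarity matrix are the positive pairs.
    """
    music_emb = F.normalize(music_emb, dim=-1)
    motion_emb = F.normalize(motion_emb, dim=-1)
    logits = music_emb @ motion_emb.t() / temperature   # (batch, batch)
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

def dynamic_geometric_weight(t, num_timesteps):
    """Assumed schedule: down-weight geometric constraints at noisy
    (large-t) steps, where exact poses are unrecoverable, and emphasize
    them near t = 0. Returns a per-sample weight of shape (batch,)."""
    return 1.0 - t.float() / num_timesteps

def total_loss(denoise_loss, geom_loss_per_sample, t, num_timesteps):
    """Combine the diffusion denoising loss with weighted geometric terms.
    `geom_loss_per_sample` (batch,) could aggregate e.g. joint-position
    and velocity losses computed on the predicted clean motion."""
    w = dynamic_geometric_weight(t, num_timesteps)
    return denoise_loss + (w * geom_loss_per_sample).mean()
```

Under this reading, the weight schedule trades off constraint satisfaction against sample diversity: strongly penalizing geometric error at high-noise timesteps would push all samples toward an average pose, so the geometric terms are emphasized only where the denoised motion is nearly determined.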
Pages: 1374-1382
Page count: 9
Related papers
50 records in total
  • [21] Dance motion generation by recombination of body parts from motion source
    Lee, Minho
    Lee, Kyogu
    Lee, Mihee
    Park, Jaeheung
    INTELLIGENT SERVICE ROBOTICS, 2018, 11 (02) : 139 - 148
  • [23] DivDiff: A Conditional Diffusion Model for Diverse Human Motion Prediction
    Yu, Hua
    Hou, Yaqing
    Pei, Wenbin
    Ong, Yew-Soon
    Zhang, Qiang
    IEEE TRANSACTIONS ON MULTIMEDIA, 2025, 27 : 1848 - 1859
  • [24] PhysDiff: Physics-Guided Human Motion Diffusion Model
    Yuan, Ye
    Song, Jiaming
    Iqbal, Umar
    Vahdat, Arash
    Kautz, Jan
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 15964 - 15975
  • [25] Transformer-based partner dance motion generation
    Wu, Ying
    Wu, Zizhao
    Ji, Chengtao
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 139
  • [26] Perceptually motivated automatic dance motion generation for music
    Kim, Jae Woo
    Fouad, Hesham
    Sibert, John L.
    Hahn, James K.
    COMPUTER ANIMATION AND VIRTUAL WORLDS, 2009, 20 (2-3) : 375 - 384
  • [27] Dance2Music-Diffusion: leveraging latent diffusion models for music generation from dance videos
    Zhang, Chaoyang
    Hua, Yan
EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING, 2024, 2024 (01)
  • [28] Context-Aware Head-and-Eye Motion Generation with Diffusion Model
    Shen, Yuxin
    Xu, Manjie
    Liang, Wei
    2024 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES, VR 2024, 2024, : 157 - 167
  • [29] Mutual Prediction Model for Predicting Information for Human Motion Generation
    Nishimura, Tomoki
    Hara, Akiyoshi
    Miyamoto, Hiroki
    Furukawa, Masahiro
    Maeda, Taro
    2020 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII), 2020, : 687 - 692
  • [30] Stochastic human motion prediction using a quantized conditional diffusion model
    Huang, Biaozhang
    Li, Xinde
    Hu, Chuanfei
    Li, Heqing
    KNOWLEDGE-BASED SYSTEMS, 2025, 309