TG-Dance: TransGAN-Based Intelligent Dance Generation with Music

Times Cited: 0
Authors
Huang, Dongjin [1 ]
Zhang, Yue [1 ]
Li, Zhenyan [1 ]
Liu, Jinhua [1 ]
Affiliations
[1] Shanghai Univ, Shanghai Film Acad, Shanghai, Peoples R China
Source
Keywords
Dance motion generation; Multimodal fusion; Upsampling; Transformer; Multi-head attention;
DOI
10.1007/978-3-031-27077-2_19
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Intelligent choreography from music is currently a popular field of study. Many works generate new motions by splicing existing fragments, which limits motion diversity. When the input is only music, frame-by-frame generation methods produce similar motions for the same piece of music. Some works mitigate this problem by adding motion as an additional input, but this requires a large number of input frames. In this paper, a new transformer-based neural network, TG-dance, is proposed for predicting high-quality 3D dance motions that follow musical rhythms. We propose a new idea of multi-level expansion of motion sequences and design a new motion encoder using multi-level transformer-upsampling layers. The multi-head attention in the transformer gives better access to contextual information, while the upsampling greatly reduces the number of input motion frames and is memory friendly. We use a generative adversarial network to effectively improve the quality of the generated motions. We designed experiments on the publicly available large-scale dataset AIST++. The experimental results show that the TG-dance network outperforms the latest models both quantitatively and qualitatively. Our model takes fewer frames of motion sequences and audio features as input to predict high-quality 3D dance motion sequences that follow the rhythm of the music.
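The abstract only outlines the encoder design at a high level. Purely as an illustration, the sketch below shows one way a multi-level transformer-upsampling motion encoder of the kind described (multi-head self-attention over a short motion-plus-audio token sequence, followed by temporal upsampling) could be organized in PyTorch; the class name, feature dimension, number of heads, upsampling factor, and number of levels are assumptions made for the example, not details taken from the paper.

```python
# Hypothetical sketch of a multi-level transformer-upsampling motion encoder.
# All layer names, dimensions, and the upsampling factor are illustrative,
# not the authors' implementation.
import torch
import torch.nn as nn


class TransformerUpsampleBlock(nn.Module):
    """One level: multi-head self-attention over motion tokens, then temporal upsampling."""

    def __init__(self, dim=256, heads=8, scale=2):
        super().__init__()
        self.attn = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.proj = nn.Linear(dim, dim * scale)  # expand each frame token into `scale` tokens
        self.scale = scale

    def forward(self, x):                        # x: (batch, frames, dim)
        x = self.attn(x)                         # multi-head attention mixes temporal context
        b, t, d = x.shape
        x = self.proj(x)                         # (batch, frames, dim * scale)
        return x.reshape(b, t * self.scale, d)   # (batch, frames * scale, dim)


# Stacking three 2x levels expands a short seed sequence (motion frames fused
# with audio features) into a longer motion sequence: 16 input frames -> 128 frames.
encoder = nn.Sequential(*[TransformerUpsampleBlock() for _ in range(3)])
seed = torch.randn(1, 16, 256)
out = encoder(seed)                              # torch.Size([1, 128, 256])
```

In such a design, each level only attends over a short sequence before expanding it, which is consistent with the abstract's claim that the upsampling reduces the number of input motion frames and keeps memory usage low; the generated sequence would then be refined adversarially with a discriminator, as in other GAN-based motion models.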
Pages: 243-254
Number of Pages: 12
Related Papers
50 records in total
  • [21] Motion to Dance Music Generation using Latent Diffusion Model
    Tan, Vanessa
    Nam, JungHyun
    Nam, Juhan
    Noh, Junyong
    [J]. PROCEEDINGS SIGGRAPH ASIA 2023 TECHNICAL COMMUNICATIONS, SA TECHNICAL COMMUNICATIONS 2023, 2023,
  • [22] MIDGET: Music Conditioned 3D Dance Generation
    Wang, Jinwu
    Mao, Wei
    Liu, Miaomiao
    [J]. ADVANCES IN ARTIFICIAL INTELLIGENCE, AI 2023, PT I, 2024, 14471 : 277 - 288
  • [23] A Spatio-temporal Learning for Music Conditioned Dance Generation
    Zhou, Li
    Luo, Yan
    [J]. PROCEEDINGS OF THE 2022 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, ICMI 2022, 2022, : 57 - 62
  • [24] DanceComposer: Dance-to-Music Generation Using a Progressive Conditional Music Generator
    Liang, Xiao
    Li, Wensheng
    Huang, Lifeng
    Gao, Chengying
    [J]. IEEE Transactions on Multimedia, 2024, 26 : 10237 - 10250
  • [25] Automatic generation algorithm analysis of dance movements based on music-action association
    He, Yun
    Zhang, Quancheng
    [J]. CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2019, 22 (02): : S3553 - S3561
  • [26] A deep learning model of dance generation for young children based on music rhythm and beat
    Kong, Shanshan
    [J]. CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2024, 36 (13):
  • [27] Dance theater (Choreography based on Broadway show music)
    Gold, S
    [J]. DANCE MAGAZINE, 2003, 77 (06): : 69 - 69
  • [29] Fuzzy Neural Network Model for Intelligent Course Development in Music and Dance Education
    Zhao, Lin
    Sun, Ying
    Tian, Tian
    [J]. INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2024, 17 (01)
  • [30] Reinforcement Learning Based Dance Movement Generation
    Ruud, Markus Toverud
    Sandberg, Tale Hisdal
    Tranvaag, Ulrik Johan Vedde
    Karbasi, Seyed Mojtaba
    Wallace, Benedikte
    Torresen, Jim
    [J]. PROCEEDINGS OF 2022 8TH INTERNATIONAL CONFERENCE ON MOVEMENT AND COMPUTING, MOCO 2022, 2022,