MMM: Generative Masked Motion Model

Cited by: 1
Authors
Pinyoanuntapong, Ekkasit [1 ]
Wang, Pu [1 ]
Lee, Minwoo [1 ]
Chen, Chen [2 ]
Affiliations
[1] Univ North Carolina Charlotte, Charlotte, NC 28223 USA
[2] Univ Cent Florida, Orlando, FL 32816 USA
DOI
10.1109/CVPR52733.2024.00153
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent advances in text-to-motion generation using diffusion and autoregressive models have shown promising results. However, these models often suffer from a trade-off between real-time performance, high fidelity, and motion editability. To address this gap, we introduce MMM, a novel yet simple motion generation paradigm based on Masked Motion Model. MMM consists of two key components: (1) a motion tokenizer that transforms 3D human motion into a sequence of discrete tokens in latent space, and (2) a conditional masked motion transformer that learns to predict randomly masked motion tokens, conditioned on the precomputed text tokens. By attending to motion and text tokens in all directions, MMM explicitly captures inherent dependency among motion tokens and semantic mapping between motion and text tokens. During inference, this allows parallel and iterative decoding of multiple motion tokens that are highly consistent with fine-grained text descriptions, therefore simultaneously achieving high-fidelity and high-speed motion generation. In addition, MMM has innate motion editability. By simply placing mask tokens in the place that needs editing, MMM automatically fills the gaps while guaranteeing smooth transitions between editing and non-editing parts. Extensive experiments on the HumanML3D and KIT-ML datasets demonstrate that MMM surpasses current leading methods in generating high-quality motion (evidenced by superior FID scores of 0.08 and 0.429), while offering advanced editing features such as body-part modification, motion in-betweening, and the synthesis of long motion sequences. In addition, MMM is two orders of magnitude faster on a single mid-range GPU than editable motion diffusion models. Our project page is available at https://exitudio.github.io/MMM-page/.
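The parallel, iterative decoding the abstract describes can be illustrated with a toy masked-decoding loop: start from a fully masked token sequence, predict all positions at once, commit the high-confidence predictions, and re-mask the low-confidence ones on a shrinking schedule until no masks remain. This is a minimal sketch of the general MaskGIT-style procedure, not the paper's implementation; the codebook size, cosine schedule, and the random stand-in `predict_logits` model are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

CODEBOOK_SIZE = 512       # assumed size of the motion tokenizer's discrete codebook
MASK_ID = CODEBOOK_SIZE   # special [MASK] token id, outside the codebook range
SEQ_LEN = 16              # toy motion-token sequence length
NUM_STEPS = 4             # number of iterative decoding steps

def predict_logits(tokens, text_embedding):
    """Stand-in for the conditional masked motion transformer:
    returns per-position logits over the codebook (random here)."""
    return rng.standard_normal((len(tokens), CODEBOOK_SIZE))

def iterative_decode(text_embedding, seq_len=SEQ_LEN, num_steps=NUM_STEPS):
    tokens = np.full(seq_len, MASK_ID)  # start fully masked
    for step in range(num_steps):
        logits = predict_logits(tokens, text_embedding)
        # softmax to get per-position prediction confidence
        probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
        probs /= probs.sum(axis=-1, keepdims=True)
        pred = probs.argmax(axis=-1)
        conf = probs.max(axis=-1)
        conf[tokens != MASK_ID] = np.inf  # already-committed tokens stay committed
        # cosine schedule: how many tokens remain masked after this step (0 at the end)
        keep_masked = int(seq_len * np.cos(np.pi / 2 * (step + 1) / num_steps))
        masked = tokens == MASK_ID
        tokens[masked] = pred[masked]                 # commit predictions in parallel
        order = np.argsort(conf)                      # lowest confidence first
        tokens[order[:keep_masked]] = MASK_ID         # re-mask low-confidence positions
    return tokens
```

The editing use case follows the same loop: instead of starting fully masked, place `MASK_ID` only over the span to be edited and keep the surrounding tokens fixed, so the model fills the gap conditioned on both sides.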
Pages: 1546-1555
Page count: 10