NAOMI: Non-Autoregressive Multiresolution Sequence Imputation

Cited: 0
Authors
Liu, Yukai [1 ]
Yu, Rose [2 ]
Zheng, Stephan [1 ]
Zhan, Eric [1 ]
Yue, Yisong [1 ]
Affiliations
[1] CALTECH, Pasadena, CA 91125 USA
[2] Northeastern Univ, Boston, MA 02115 USA
Keywords
DOI
Not available
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Missing value imputation is a fundamental problem in spatiotemporal modeling, from motion tracking to the dynamics of physical systems. Deep autoregressive models suffer from error propagation, which becomes catastrophic when imputing long-range sequences. In this paper, we take a non-autoregressive approach and propose a novel deep generative model, Non-AutOregressive Multiresolution Imputation (NAOMI), to impute long-range sequences given arbitrary missing patterns. NAOMI exploits the multiresolution structure of spatiotemporal data and decodes recursively from coarse to fine-grained resolutions using a divide-and-conquer strategy. We further enhance our model with adversarial training. Evaluated extensively on benchmark datasets from systems with both deterministic and stochastic dynamics, NAOMI demonstrates significant improvements in imputation accuracy (reducing average error by 60% compared to autoregressive counterparts) and generalization to long-range sequences.
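To make the divide-and-conquer decoding order concrete, the sketch below recursively imputes the midpoint between the two nearest known steps, then recurses into each half (coarse to fine). This is only an illustration of the imputation *order* described in the abstract: the `multires_impute` function and its linear-interpolation fill-in are hypothetical stand-ins for NAOMI's learned forward/backward decoders, not the paper's model.

```python
def multires_impute(seq, is_missing):
    """Fill missing entries coarse-to-fine by recursively imputing the
    midpoint between two known indices. Linear interpolation stands in
    for the learned decoder; assumes the first and last steps are known.
    seq: list of floats (missing entries may hold any placeholder).
    is_missing: list of bools, True where seq must be imputed."""
    def fill(lo, hi):
        # lo and hi index known values; impute their midpoint first,
        # then recurse into the two finer-resolution halves.
        if hi - lo < 2:
            return
        mid = (lo + hi) // 2
        if is_missing[mid]:
            frac = (mid - lo) / (hi - lo)
            seq[mid] = seq[lo] + (seq[hi] - seq[lo]) * frac
            is_missing[mid] = False
        fill(lo, mid)
        fill(mid, hi)

    known = [i for i, m in enumerate(is_missing) if not m]
    for lo, hi in zip(known, known[1:]):
        fill(lo, hi)
    return seq

# Gap of three missing steps between two observed endpoints:
imputed = multires_impute([0.0, None, None, None, 4.0],
                          [False, True, True, True, False])
print(imputed)  # → [0.0, 1.0, 2.0, 3.0, 4.0]
```

Note the decoding schedule: step 2 (the coarsest midpoint) is generated before steps 1 and 3, so no imputed value ever conditions on a long chain of previous imputations, which is the property that avoids autoregressive error propagation.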
Pages: 11
Related Papers
50 records
  • [1] NON-AUTOREGRESSIVE SEQUENCE-TO-SEQUENCE VOICE CONVERSION
    Hayashi, Tomoki
    Huang, Wen-Chin
    Kobayashi, Kazuhiro
    Toda, Tomoki
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 7068 - 7072
  • [2] Deep Equilibrium Non-Autoregressive Sequence Learning
    Zheng, Zaixiang
    Zhou, Yi
    Zhou, Hao
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023), 2023, : 11763 - 11781
  • [3] A Study of Non-autoregressive Model for Sequence Generation
    Ren, Yi
    Liu, Jinglin
    Tan, Xu
    Zhao, Zhou
    Zhao, Sheng
    Liu, Tie-Yan
    58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020), 2020, : 149 - 159
  • [4] AN INVESTIGATION OF STREAMING NON-AUTOREGRESSIVE SEQUENCE-TO-SEQUENCE VOICE CONVERSION
    Hayashi, Tomoki
    Kobayashi, Kazuhiro
    Toda, Tomoki
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 6802 - 6806
  • [6] FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow
    Ma, Xuezhe
    Zhou, Chunting
    Li, Xian
    Neubig, Graham
    Hovy, Eduard
    2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 4282 - 4292
  • [7] Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement
    Lee, Jason
    Mansimov, Elman
    Cho, Kyunghyun
    2018 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2018), 2018, : 1173 - 1182
  • [8] Translating Images to Road Network: A Non-Autoregressive Sequence-to-Sequence Approach
    Lu, Jiachen
    Peng, Renyuan
    Cai, Xinyue
    Xu, Hang
    Li, Hongyang
    Wen, Feng
    Zhang, Wei
    Zhang, Li
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 23 - 33
  • [9] Integrated Training for Sequence-to-Sequence Models Using Non-Autoregressive Transformer
    Tokarchuk, Evgeniia
    Rosendahl, Jan
    Wang, Weiyue
    Petrushkov, Pavel
    Lancewicki, Tomer
    Khadivi, Shahram
    Ney, Hermann
    IWSLT 2021: THE 18TH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE TRANSLATION, 2021, : 276 - 286
  • [10] Improving Autoregressive NMT with Non-Autoregressive Model
    Zhou, Long
    Zhang, Jiajun
    Zong, Chengqing
    WORKSHOP ON AUTOMATIC SIMULTANEOUS TRANSLATION CHALLENGES, RECENT ADVANCES, AND FUTURE DIRECTIONS, 2020, : 24 - 29