Unified Privileged Knowledge Distillation Framework for Human Motion Prediction

Cited: 0
Authors
Sun, Xiaoning [1 ]
Sun, Huaijiang [1 ]
Wei, Dong [1 ]
Wang, Jin [2 ]
Li, Bin [3 ]
Li, Weiqing [1 ]
Lu, Jianfeng [1 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210014, Peoples R China
[2] Nantong Univ, Sch Informat Sci & Technol, Nantong 226000, Peoples R China
[3] AiForward Co Ltd, Tianjin 300457, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Predictive models; Training; Interpolation; Extrapolation; Task analysis; Knowledge engineering; Human motion prediction; privileged knowledge; knowledge distillation;
DOI
10.1109/TCSVT.2024.3440488
CLC Classification Number
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808 ; 0809 ;
Abstract
Previous works on human motion prediction follow the pattern of building an extrapolation mapping between the observed sequence and the one to be predicted. However, the inherent difficulty of time-series extrapolation and the complexity of human motion data still result in many failure cases. In this paper, we explore a longer sequence horizon with more poses following behind, which breaks the limitation of extrapolation problems, where data/information on the far side of the predictive target is completely unknown. As these poses are unavailable at test time, we regard them as a privileged sequence, and propose a Two-stage Privileged Knowledge Distillation framework that incorporates privileged information into the forecasting process while avoiding its direct use. Specifically, in the first stage, both the observed and privileged sequences are encoded for interpolation, with the Privileged-sequence-Encoder (Priv-Encoder) learning privileged knowledge (PK) simultaneously. Then, in the second stage, where the privileged sequence is not observable, a novel PK-Simulator distills PK by approximating the behavior of Priv-Encoder while taking only the observed sequence as input, enabling a PK-aware prediction pattern. Moreover, we present a One-stage version of this framework, using a Shared Encoder that integrates the observation encoding in both the interpolation and prediction branches to realize parallel training, which helps produce the PK most conducive to the prediction pipeline. Experimental results show that our frameworks are model-agnostic and can be applied to existing motion prediction models with an encoder-decoder architecture to achieve improved performance.
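The second-stage idea described in the abstract can be sketched in a few lines: a PK-Simulator that sees only the observed sequence is trained to match the features produced by a (frozen) Priv-Encoder that sees the privileged future poses. The toy NumPy sketch below is illustrative only; the linear encoders, dimensions, and names are assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: an observed sequence and a privileged (future) sequence,
# each flattened to a vector, mapped to a small PK feature space.
D_OBS, D_PRIV, D_FEAT = 12, 8, 4

# Stand-in for the stage-one Priv-Encoder: a fixed (frozen) linear map
# from the privileged sequence to a PK feature vector.
W_priv = rng.normal(size=(D_FEAT, D_PRIV))

def priv_encoder(priv_seq):
    """Encode the privileged future sequence into PK features (frozen)."""
    return W_priv @ priv_seq

# Stage two: the PK-Simulator sees only the observed sequence and is
# trained to approximate the Priv-Encoder's output (feature distillation).
W_sim = np.zeros((D_FEAT, D_OBS))

def pk_simulator(obs_seq):
    return W_sim @ obs_seq

# Synthetic pairs where the privileged poses are a linear function of the
# observation, so the simulator can in principle recover the PK features.
A = rng.normal(size=(D_PRIV, D_OBS))
obs_batch = rng.normal(size=(64, D_OBS))
priv_batch = obs_batch @ A.T

lr = 0.05
for _ in range(500):
    grad = np.zeros_like(W_sim)
    for obs, priv in zip(obs_batch, priv_batch):
        residual = pk_simulator(obs) - priv_encoder(priv)  # distillation error
        grad += np.outer(residual, obs)
    W_sim -= lr * grad / len(obs_batch)

# After distillation, the simulator's PK features should track the
# Priv-Encoder's, even though it never sees the privileged sequence.
err = np.mean((obs_batch @ W_sim.T - priv_batch @ W_priv.T) ** 2)
print(f"mean distillation error: {err:.4f}")
```

In the actual framework, the distilled PK features would then condition the prediction decoder, which is what the abstract calls a "PK-aware prediction pattern".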
Pages: 12937-12948
Page count: 12
Related Papers
50 records in total
  • [31] Everything2Motion: Synchronizing Diverse Inputs via a Unified Framework for Human Motion Synthesis
    Fan, Zhaoxin
    Ji, Longbin
    Xu, Pengxin
    Shen, Fan
    Chen, Kai
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 2, 2024, : 1688 - 1697
  • [32] Split Knowledge Transfer in Learning Under Privileged Information Framework
    Gauraha, Niharika
    Soderdahl, Fabian
    Spjuth, Ola
    CONFORMAL AND PROBABILISTIC PREDICTION AND APPLICATIONS, VOL 105, 2019, 105
  • [33] The Atlas Benchmark: an Automated Evaluation Framework for Human Motion Prediction
    Rudenko, Andrey
    Palmieri, Luigi
    Huang, Wanting
    Lilienthal, Achim J.
    Arras, Kai O.
    2022 31ST IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (IEEE RO-MAN 2022), 2022, : 636 - 643
  • [34] GGTr: An Innovative Framework for Accurate and Realistic Human Motion Prediction
    Huang, Biaozhang
    Li, Xinde
    ELECTRONICS, 2023, 12 (15)
  • [35] A unified framework for financial commentary prediction
    Ozyegen, Ozan
    Malik, Garima
    Cevik, Mucahit
    Ioi, Kevin
    El Mokhtari, Karim
    INFORMATION TECHNOLOGY & MANAGEMENT, 2024,
  • [36] Combined Knowledge Distillation Framework: Breaking Down Knowledge Barriers
    Ni, Shuiping
    Wang, Wendi
    Zhu, Mingfu
    Ma, Xinliang
    Zhang, Yizhe
    Journal of Computers (Taiwan), 2024, 35 (04) : 109 - 122
  • [37] IterDE: An Iterative Knowledge Distillation Framework for Knowledge Graph Embeddings
    Liu, Jiajun
    Wang, Peng
    Shang, Ziyu
    Wu, Chenxiao
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 4, 2023, : 4488 - 4496
  • [38] A Novel Two-Stage Knowledge Distillation Framework for Skeleton-Based Action Prediction
    Liu, Cuiwei
    Zhao, Xiaoxue
    Li, Zhaokui
    Yan, Zhuo
    Du, Chong
    IEEE SIGNAL PROCESSING LETTERS, 2022, 29 : 1918 - 1922
  • [39] Improving knowledge distillation using unified ensembles of specialized teachers
    Zaras, Adamantios
    Passalis, Nikolaos
    Tefas, Anastasios
    PATTERN RECOGNITION LETTERS, 2021, 146 (146) : 215 - 221
  • [40] A Fast Knowledge Distillation Framework for Visual Recognition
    Shen, Zhiqiang
    Xing, Eric
    COMPUTER VISION, ECCV 2022, PT XXIV, 2022, 13684 : 673 - 690