The Imaginative Generative Adversarial Network: Automatic Data Augmentation for Dynamic Skeleton-Based Hand Gesture and Human Action Recognition

Cited by: 0
Authors
Shen, Junxiao [1 ]
Dudley, John [1 ]
Kristensson, Per Ola [1 ]
Affiliations
[1] Univ Cambridge, Dept Engn, Cambridge, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
DOI
Not available
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning approaches deliver state-of-the-art performance in the recognition of spatiotemporal human motion data. However, a key challenge in these recognition tasks is the limited availability of training data. Insufficient training data leads to over-fitting, and data augmentation is one approach to addressing this challenge. Existing data augmentation strategies based on scaling, shifting, and interpolation offer limited generalizability and typically require detailed inspection of the dataset as well as hundreds of GPU hours for hyperparameter optimization. In this paper, we present a novel automatic data augmentation model, the Imaginative Generative Adversarial Network (GAN), which approximates the distribution of the input data and samples new data from this distribution. It is automatic in that it requires no data inspection and little hyperparameter tuning, making it a low-cost and low-effort approach to generating synthetic data. We demonstrate our approach on small-scale skeleton-based datasets with a comprehensive experimental analysis. Our results show that the augmentation strategy is fast to train and can improve classification accuracy for both conventional neural networks and state-of-the-art methods.
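The abstract describes the mechanism only at a high level: a GAN approximates the distribution of the limited training data, and synthetic samples drawn from the trained generator are appended to the training set. As an illustration of that general augmentation loop (not the authors' model or architecture), here is a minimal one-dimensional GAN in NumPy with hand-derived gradients; the linear generator, logistic discriminator, toy data, and all hyperparameters are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data standing in for scarce skeleton features: N(3, 0.5).
real = rng.normal(loc=3.0, scale=0.5, size=1000)

# Generator G(z) = wg*z + bg ; Discriminator D(x) = sigmoid(wd*x + bd)
wg, bg = 1.0, 0.0
wd, bd = 0.1, 0.0
lr = 0.05

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-np.clip(a, -60.0, 60.0)))

for step in range(4000):
    # Discriminator ascent on E[log D(real)] + E[log(1 - D(fake))].
    xr = rng.choice(real, size=64)
    z = rng.normal(size=64)
    xf = wg * z + bg
    dr, df = sigmoid(wd * xr + bd), sigmoid(wd * xf + bd)
    g_wd = np.mean((1.0 - dr) * xr) - np.mean(df * xf)
    g_bd = np.mean(1.0 - dr) - np.mean(df)
    wd += lr * g_wd
    bd += lr * g_bd

    # Generator ascent on the non-saturating objective E[log D(fake)].
    z = rng.normal(size=64)
    xf = wg * z + bg
    df = sigmoid(wd * xf + bd)
    g_wg = np.mean((1.0 - df) * wd * z)   # chain rule through D and G
    g_bg = np.mean((1.0 - df) * wd)
    wg += lr * g_wg
    bg += lr * g_bg

# The augmentation step: sample synthetic data from the learned
# generator and merge it with the original training set.
synthetic = wg * rng.normal(size=200) + bg
augmented = np.concatenate([real, synthetic])
```

After training, samples from the generator should cluster near the real data's mean, so the augmented set enlarges the training pool without hand-tuned scaling or shifting rules, which is the property the abstract highlights.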
Pages: 8
Related Papers
50 records total
  • [1] Skeleton-Based Dynamic Hand Gesture Recognition
    De Smedt, Quentin
    Wannous, Hazem
    Vandeborre, Jean-Philippe
    PROCEEDINGS OF 29TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, (CVPRW 2016), 2016, : 1206 - 1214
  • [2] An Efficient Graph Convolution Network for Skeleton-Based Dynamic Hand Gesture Recognition
    Peng, Sheng-Hui
    Tsai, Pei-Hsuan
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2023, 15 (04) : 2179 - 2189
  • [3] Decoupled Representation Network for Skeleton-Based Hand Gesture Recognition
    Zhong, Zhaochao
    Li, Yangke
    Yang, Jifang
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT II, 2022, 13530 : 469 - 480
  • [4] Adversarial Attack on Skeleton-Based Human Action Recognition
    Liu, Jian
    Akhtar, Naveed
    Mian, Ajmal
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (04) : 1609 - 1622
  • [5] Decoupled and boosted learning for skeleton-based dynamic hand gesture recognition
    Li, Yangke
    Wei, Guangshun
    Desrosiers, Christian
    Zhou, Yuanfeng
    PATTERN RECOGNITION, 2024, 153
  • [6] Compact joints encoding for skeleton-based dynamic hand gesture recognition
    Li, Yangke
    Ma, Dongyang
    Yu, Yuhang
    Wei, Guangshun
    Zhou, Yuanfeng
    COMPUTERS & GRAPHICS-UK, 2021, 97 : 191 - 199
  • [7] SPD Siamese Neural Network for Skeleton-based Hand Gesture Recognition
    Akremi, Mohamed Sanim
    Slama, Rim
    Tabia, Hedi
    PROCEEDINGS OF THE 17TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISAPP), VOL 4, 2022, : 394 - 402
  • [8] MOTION FEATURE AUGMENTED RECURRENT NEURAL NETWORK FOR SKELETON-BASED DYNAMIC HAND GESTURE RECOGNITION
    Chen, Xinghao
    Guo, Hengkai
    Wang, Guijin
    Zhang, Li
    2017 24TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2017, : 2881 - 2885
  • [9] Sample Fusion Network: An End-to-End Data Augmentation Network for Skeleton-Based Human Action Recognition
    Meng, Fanyang
    Liu, Hong
    Liang, Yongsheng
    Tu, Juanhui
    Liu, Mengyuan
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, 28 (11) : 5281 - 5295
  • [10] SPATIAL-TEMPORAL DATA AUGMENTATION BASED ON LSTM AUTOENCODER NETWORK FOR SKELETON-BASED HUMAN ACTION RECOGNITION
    Tu, Juanhui
    Liu, Hong
    Meng, Fanyang
    Liu, Mengyuan
    Ding, Runwei
    2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2018, : 3478 - 3482