Joint Transferable Dictionary Learning and View Adaptation for Multi-view Human Action Recognition

Cited by: 8
Authors
Sun, Bin [1 ]
Kong, Dehui [1 ]
Wang, Shaofan [1 ]
Wang, Lichun [1 ]
Yin, Baocai [1 ]
Affiliations
[1] Beijing Univ Technol, Fac Informat Technol, Beijing Key Lab Multimedia & Intelligent Software, Beijing 100124, Peoples R China
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China
Keywords
Action recognition; multi-view; sparse representation; transfer learning; representation; surveillance
DOI
10.1145/3434746
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Multi-view human action recognition remains a challenging problem due to large view changes. In this article, we propose a transfer learning-based framework, the transferable dictionary learning and view adaptation (TDVA) model, for multi-view human action recognition. In the transferable dictionary learning phase, TDVA learns a set of view-specific transferable dictionaries that enable the same actions observed from different views to share the same sparse representations, thereby transferring action features from different views into an intermediate domain. In the view adaptation phase, TDVA comprehensively analyzes global, local, and individual characteristics of the samples, and jointly learns balanced distribution adaptation, locality preservation, and discrimination preservation, aiming to transfer the sparse action features of different views from the intermediate domain into a common domain. In other words, TDVA progressively bridges the distribution gap among actions from various views through these two phases. Experimental results on the IXMAS, ACT4², and NUCLA action datasets demonstrate that TDVA outperforms state-of-the-art methods.
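The first phase described in the abstract, learning view-specific dictionaries coupled through one shared sparse code, can be illustrated with a minimal alternating-minimization sketch. This is not the authors' TDVA algorithm (which adds transfer constraints and the full view-adaptation phase); it is a simplified NumPy illustration of the core idea that features X_v from each view v are approximated as D_v @ Z with a single sparse code Z shared by all views. All function and parameter names here are hypothetical.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm; zeroes out small entries of x."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def shared_sparse_dicts(views, n_atoms=8, lam=0.05, n_iter=100, seed=0):
    """Toy sketch: learn one dictionary per view plus a single shared
    sparse code Z, so each view matrix X_v (features x samples) is
    approximated as dicts[v] @ Z.  Alternates between a proximal-gradient
    update on Z and a per-view least-squares update of each dictionary."""
    rng = np.random.default_rng(seed)
    n_samples = views[0].shape[1]
    dicts = [rng.standard_normal((X.shape[0], n_atoms)) for X in views]
    Z = np.zeros((n_atoms, n_samples))
    for _ in range(n_iter):
        # Sparse-coding step: one ISTA update on the summed
        # reconstruction loss over all views, then soft thresholding.
        L = sum(np.linalg.norm(D, 2) ** 2 for D in dicts)  # Lipschitz bound
        grad = sum(D.T @ (D @ Z - X) for D, X in zip(dicts, views))
        Z = soft_threshold(Z - grad / L, lam / L)
        # Dictionary step: least-squares fit of each view onto the
        # shared code (couples the views through Z).
        dicts = [X @ np.linalg.pinv(Z) for X in views]
    return dicts, Z
```

After training, samples from any view can be compared directly in the shared code space Z, which is the property the abstract exploits before the view-adaptation phase.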
Pages: 23
Related Papers
50 records
  • [1] Task-driven joint dictionary learning model for multi-view human action recognition
    Liu, Zhigang
    Wang, Lei
    Yin, Ziyang
    Xue, Yanbo
    DIGITAL SIGNAL PROCESSING, 2022, 126
  • [2] Multi-view representation learning for multi-view action recognition
    Hao, Tong
    Wu, Dan
    Wang, Qian
    Sun, Jin-Sheng
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2017, 48 : 453 - 460
  • [3] Multi-view discriminative and structured dictionary learning with group sparsity for human action recognition
    Gao, Z.
    Zhang, H.
    Xu, G. P.
    Xue, Y. B.
    Hauptmann, A. G.
    SIGNAL PROCESSING, 2015, 112 : 83 - 97
  • [4] Multi-view SAR Target Recognition Based on Joint Dictionary and Classifier Learning
    Ren, Haohao
    Yu, Xuelian
    Zou, Lin
    Zhou, Yun
    Wang, Xuegang
    2019 IEEE RADAR CONFERENCE (RADARCONF), 2019,
  • [5] Multi-View Multi-Instance Learning Based on Joint Sparse Representation and Multi-View Dictionary Learning
    Li, Bing
    Yuan, Chunfeng
    Xiong, Weihua
    Hu, Weiming
    Peng, Houwen
    Ding, Xinmiao
    Maybank, Steve
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2017, 39 (12) : 2554 - 2560
  • [6] Neural representation and learning for multi-view human action recognition
    Iosifidis, Alexandros
    Tefas, Anastasios
    Pitas, Ioannis
    2012 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2012,
  • [7] Jointly Learning Multi-view Features for Human Action Recognition
    Wang, Ruoshi
    Liu, Zhigang
    Yin, Ziyang
    PROCEEDINGS OF THE 32ND 2020 CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2020), 2020, : 4858 - 4861
  • [8] Uncorrelated Multi-View Discrimination Dictionary Learning for Recognition
    Jing, Xiao-Yuan
    Hu, Rui-Min
    Wu, Fei
    Chen, Xi-Lin
    Liu, Qian
    Yao, Yong-Fang
    PROCEEDINGS OF THE TWENTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2014, : 2787 - 2795
  • [9] Cross-View Action Recognition via Transferable Dictionary Learning
    Zheng, Jingjing
    Jiang, Zhuolin
    Chellappa, Rama
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2016, 25 (06) : 2542 - 2556
  • [10] Multi-view human action recognition: A survey
    Iosifidis, Alexandros
    Tefas, Anastasios
    Pitas, Ioannis
    2013 NINTH INTERNATIONAL CONFERENCE ON INTELLIGENT INFORMATION HIDING AND MULTIMEDIA SIGNAL PROCESSING (IIH-MSP 2013), 2013, : 522 - 525