Weighted averaging fusion for multi-view skeletal data and its application in action recognition

Cited: 14
Authors
Azis, Nur Aziza [1 ]
Jeong, Young-Seob [1 ]
Choi, Ho-Jin [1 ]
Iraqi, Youssef [2 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Dept Comp Sci, Daejeon, South Korea
[2] Khalifa Univ, Dept Elect & Comp Engn, Abu Dhabi, U Arab Emirates
Funding
National Research Foundation, Singapore
Keywords
image fusion; merging; video cameras; object tracking; feature extraction; pose estimation; skeleton-based action recognition; weighted averaging fusion; skeletal data merging; camera view merging; skeletal tracking quality; reliability evaluation; skeletal data fusion; frame level feature;
DOI
10.1049/iet-cvi.2015.0146
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Existing studies in skeleton-based action recognition mainly utilise skeletal data taken from a single camera. Because the skeletal tracking of a single camera is noisy and unreliable, combining data from multiple cameras can improve the tracking quality and hence increase the recognition accuracy. In this study, the authors propose a method called weighted averaging fusion, which merges skeletal data from two or more camera views. The method first evaluates the reliability of each set of corresponding joints based on their distances to the centroid, then computes the weighted average of the selected joints, with each joint weighted by the overall reliability of the camera reporting it. The fused skeletal data thus obtained are used as the input to the action recognition step. Experiments using various frame-level features and testing schemes show that recognition accuracy improves by more than 10% with these fused skeletal data compared with the single-view case.
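The fusion step summarised in the abstract can be sketched in code. This is a minimal illustration under stated assumptions, not the authors' implementation: the paper specifies the idea (reliability from distance to the per-joint centroid, then a reliability-weighted average across views), while the concrete reliability formula (inverse mean centroid distance) and the helper names `fuse_skeletons` and `centroid` below are assumptions made for the example.

```python
def centroid(points):
    """Component-wise mean of a list of 3-D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def dist(a, b):
    """Euclidean distance between two 3-D points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fuse_skeletons(views, eps=1e-6):
    """Fuse one skeleton per camera view into a single skeleton.

    views: list of skeletons; each skeleton is a list of (x, y, z) joints,
           with joint j in every view corresponding to the same body joint.
    Returns the fused skeleton as a list of (x, y, z) tuples.
    """
    n_joints = len(views[0])
    # Centroid of each set of corresponding joints across views.
    cents = [centroid([v[j] for v in views]) for j in range(n_joints)]
    # A view's overall reliability: inverse of its mean distance to the
    # per-joint centroids (an assumed concrete form of the paper's idea).
    weights = []
    for v in views:
        mean_d = sum(dist(v[j], cents[j]) for j in range(n_joints)) / n_joints
        weights.append(1.0 / (mean_d + eps))
    total = sum(weights)
    # Reliability-weighted average of each joint across views.
    return [
        tuple(sum(w * v[j][i] for w, v in zip(weights, views)) / total
              for i in range(3))
        for j in range(n_joints)
    ]
```

With three views, two of which agree on a joint while the third is an outlier, the fused joint lands closer to the agreeing views than a plain average would. In practice one would also drop joints a camera reports as untracked before averaging, matching the joint-selection step the abstract mentions.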
Pages: 134-142
Page count: 9
Related papers
50 records in total
  • [1] Compositional action recognition with multi-view feature fusion
    Zhao, Zhicheng
    Liu, Yingan
    Ma, Lei
    [J]. PLOS ONE, 2022, 17 (04):
  • [2] Multi-View and Multi-Modal Action Recognition with Learned Fusion
    Ardianto, Sandy
    Hang, Hsueh-Ming
    [J]. 2018 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2018, : 1601 - 1604
  • [3] Multi-view representation learning for multi-view action recognition
    Hao, Tong
    Wu, Dan
    Wang, Qian
    Sun, Jin-Sheng
    [J]. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2017, 48 : 453 - 460
  • [4] Multi-view Recognition Using Weighted View Selection
    Spurlock, Scott
    Wu, Hui
    Souvenir, Richard
    [J]. COMPUTER VISION - ACCV 2014, PT IV, 2015, 9006 : 538 - 552
  • [5] MULTI-VIEW FUSION FOR ACTION RECOGNITION IN CHILD-ROBOT INTERACTION
    Efthymiou, Niki
    Koutras, Petros
    Filntisis, Panagiotis Paraskevas
    Potamianos, Gerasimos
    Maragos, Petros
    [J]. 2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2018, : 455 - 459
  • [6] Weighted motion averaging for the registration of multi-view range scans
    Guo, Rui
    Zhu, Jihua
    Li, Yaochen
    Chen, Dapeng
    Li, Zhongyu
    Zhang, Yongqin
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2018, 77 (09) : 10651 - 10668
  • [7] Multi-view gait recognition fusion methodology
    Nizami, Imran Fareed
    Hong, Sungjun
    Lee, Heesung
    Ahn, Sungje
    Toh, Kar-Ann
    Kim, Euntai
    [J]. ICIEA 2008: 3RD IEEE CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS, PROCEEDINGS, VOLS 1-3, 2008, : 2101 - 2105
  • [8] A Multi-view SAR target recognition method based on adaptive weighted decision fusion
    Zhang, Tingwei
    [J]. REMOTE SENSING LETTERS, 2023, 14 (11) : 1196 - 1205
  • [9] DVANet: Disentangling View and Action Features for Multi-View Action Recognition
    Siddiqui, Nyle
    Tirupattur, Praveen
    Shah, Mubarak
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 5, 2024, : 4873 - 4881