Weakly-supervised action localization based on seed superpixels

Cited by: 4
Authors
Ullah, Sami [1 ]
Bhatti, Naeem [1 ]
Qasim, Tehreem [1 ]
Hassan, Najmul [1 ]
Zia, Muhammad [1 ]
Affiliations
[1] Quaid I Azam Univ, Dept Elect, COMSIP Lab, Islamabad 45320, Pakistan
Keywords
Action localization; Action recognition; Feature extraction; Seed superpixels; Human action recognition
DOI
10.1007/s11042-020-09992-2
CLC number
TP [Automation technology, computer technology]
Discipline classification code
0812
Abstract
In this paper, we present action localization based on weak supervision with seed superpixels. To benefit from superpixel segmentation and to learn a priori knowledge, we select seed superpixels equally from the action and non-action regions of a few video frames of an action sequence. We compute the correlation, joint entropy, and joint histogram of the optical-flow magnitudes and intensity information as features of the video-frame superpixels. An SVM is trained on the features of the action and non-action seed superpixels and is used to classify the video-frame superpixels as action or non-action. The superpixels classified as action provide the action localization. The localized action superpixels are then used to recognize the action class with a Dendrogram-SVM operating on the already extracted features. We evaluate the proposed approach for action localization and recognition on the UCF Sports and UCF-101 action datasets; the results demonstrate that seed superpixels provide effective action localization, which in turn facilitates recognition of the action class.
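The abstract describes a concrete pipeline: superpixel segmentation, per-superpixel features from intensity and optical-flow magnitude (correlation, joint entropy, joint histogram), and an SVM trained on a few labelled seed superpixels. The following is a minimal illustrative sketch of that pipeline, not the authors' implementation: it assumes OpenCV's Farneback optical flow, scikit-image's SLIC superpixels, and scikit-learn's SVC; the bin count and segmentation parameters are placeholder choices, and the Dendrogram-SVM recognition stage is omitted.

# Illustrative sketch only (assumed tools: OpenCV, scikit-image, scikit-learn);
# parameters and helper names below are placeholders, not the paper's settings.
import cv2
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

BINS = 8  # assumed joint-histogram resolution

def superpixel_features(prev_frame, frame, n_segments=200):
    """Return (label_image, features): one feature vector per superpixel of `frame`."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flow_mag = np.linalg.norm(flow, axis=2)              # optical-flow magnitude
    label_image = slic(frame, n_segments=n_segments, compactness=10)

    feats = []
    for sp in np.unique(label_image):
        mask = label_image == sp
        inten = gray[mask].astype(float)
        mag = flow_mag[mask]
        # Joint histogram of intensity and flow magnitude inside the superpixel
        hist, _, _ = np.histogram2d(inten, mag, bins=BINS)
        p = hist / max(hist.sum(), 1e-12)
        joint_entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        # Correlation between intensity and flow magnitude (0 if degenerate)
        corr = 0.0
        if inten.std() > 0 and mag.std() > 0:
            corr = float(np.corrcoef(inten, mag)[0, 1])
        feats.append(np.concatenate(([corr, joint_entropy], p.ravel())))
    return label_image, np.asarray(feats)

def train_seed_svm(seed_features, seed_labels):
    """Weak supervision: fit an SVM on seed superpixels (1 = action, 0 = non-action)."""
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(seed_features, seed_labels)
    return clf

def localize(clf, label_image, feats):
    """Binary mask covering the superpixels classified as action."""
    ids = np.unique(label_image)
    pred = clf.predict(feats)
    return np.isin(label_image, ids[pred == 1])

In use, one would extract features for a few frames, label a balanced set of seed superpixels as action/non-action, train the SVM, and then apply `localize` to every frame of the sequence; the resulting masks feed the (omitted) recognition stage.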
Pages: 6203-6220
Number of pages: 18
Related papers (50 in total)
  • [32] Learning Action Completeness from Points for Weakly-supervised Temporal Action Localization
    Lee, Pilhyeon
    Byun, Hyeran
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 13628 - 13637
  • [33] Deep cascaded action attention network for weakly-supervised temporal action localization
    Xia, Hui-fen
    Zhan, Yong-zhao
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (19) : 29769 - 29787
  • [34] ACGNet: Action Complement Graph Network for Weakly-Supervised Temporal Action Localization
    Yang, Zichen
    Qin, Jie
    Huang, Di
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 3090 - 3098
  • [35] Proposal-based Multiple Instance Learning for Weakly-supervised Temporal Action Localization
    Ren, Huan
    Yang, Wenfei
    Zhang, Tianzhu
    Zhang, Yongdong
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 2394 - 2404
  • [36] CoLA: Weakly-Supervised Temporal Action Localization with Snippet Contrastive Learning
    Zhang, Can
    Cao, Meng
    Yang, Dongming
    Chen, Jie
    Zou, Yuexian
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 16005 - 16014
  • [37] Adversarial Seeded Sequence Growing for Weakly-Supervised Temporal Action Localization
    Zhang, Chengwei
    Xu, Yunlu
    Cheng, Zhanzhan
    Niu, Yi
    Pu, Shiliang
    Wu, Fei
    Zou, Futai
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 738 - 746
  • [38] Self-supervised temporal adaptive learning for weakly-supervised temporal action localization
    Sheng, Jinrong
    Yu, Jiaruo
    Li, Ziqiang
    Li, Ao
    Ge, Yongxin
    INFORMATION SCIENCES, 2025, 705
  • [39] Weakly-supervised Temporal Action Localization with Adaptive Clustering and Refining Network
    Ren, Hao
    Ran, Wu
    Liu, Xingson
    Ren, Haoran
    Lu, Hong
    Zhang, Rui
    Jin, Cheng
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 1008 - 1013
  • [40] Dual-Evidential Learning for Weakly-supervised Temporal Action Localization
    Chen, Mengyuan
    Gao, Junyu
    Yang, Shicai
    Xu, Changsheng
    COMPUTER VISION - ECCV 2022, PT IV, 2022, 13664 : 192 - 208