Collaborative Foreground, Background, and Action Modeling Network for Weakly Supervised Temporal Action Localization

Cited by: 9
Authors
Moniruzzaman, Md. [1 ]
Yin, Zhaozheng [2 ]
Affiliations
[1] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
[2] SUNY Stony Brook, Dept Comp Sci, Dept Biomed Informat, Stony Brook, NY 11794 USA
Funding
U.S. National Science Foundation (NSF);
Keywords
Temporal action localization; foreground modeling; background modeling; action modeling;
DOI
10.1109/TCSVT.2023.3272891
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
In this paper, we explore the problem of Weakly supervised Temporal Action Localization (W-TAL), where the task is to localize the temporal boundaries of all action instances in an untrimmed video with only video-level supervision. Existing W-TAL methods achieve good action localization performance by separating the discriminative action and background frames. However, there is still a large performance gap between weakly and fully supervised methods. The main reason is that, in addition to the discriminative action and background frames, there are many ambiguous action and background frames. Due to the lack of temporal annotations in W-TAL, the ambiguous background frames may be localized as foreground and the ambiguous action frames may be suppressed as background, resulting in false positives and false negatives, respectively. In this paper, we introduce a novel collaborative Foreground, Background, and Action Modeling Network (FBA-Net) to suppress the background (i.e., both the discriminative and ambiguous background) frames and localize the actual action-related (i.e., both the discriminative and ambiguous action) frames as foreground, for precise temporal action localization. We design our FBA-Net with three branches: the foreground modeling (FM) branch, the background modeling (BM) branch, and the class-specific action and background modeling (CM) branch. The CM branch learns to highlight the video frames related to C action classes and to separate the action-related frames of the C action classes from the (C + 1)th background class. The collaboration between FM and CM regularizes the consistency between the FM and the C action classes of CM, which reduces the false negative rate by localizing the different actual action-related (i.e., both the discriminative and ambiguous action) frames in a video as foreground.
On the other hand, the collaboration between BM and CM regularizes the consistency between the BM and the (C + 1)th background class of CM, which reduces the false positive rate by suppressing both the discriminative and ambiguous background frames. Furthermore, the collaboration between FM and BM enforces more effective foreground-background separation. To evaluate the effectiveness of our FBA-Net, we perform extensive experiments on two challenging datasets, THUMOS14 and ActivityNet1.3. The experiments show that our FBA-Net attains superior results.
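The three pairwise collaborations described above can be illustrated as simple consistency losses. This is a minimal sketch under stated assumptions, not the paper's actual formulation: the function name `fba_consistency_losses`, the use of mean-squared-error consistency, and the complementarity term between the two attentions are all hypothetical choices for illustration; the paper may define these losses differently.

```python
import numpy as np

def fba_consistency_losses(cas, fg_att, bg_att):
    """Illustrative collaboration losses among the three branches.

    cas    : (T, C+1) per-frame class probabilities from the CM branch,
             normalized over classes; the last column is the background class.
    fg_att : (T,) foreground attention from the FM branch, in [0, 1].
    bg_att : (T,) background attention from the BM branch, in [0, 1].
    """
    # Probability that a frame belongs to any of the C action classes.
    action_prob = cas[:, :-1].sum(axis=1)   # (T,)
    bg_prob = cas[:, -1]                    # (T,)

    # FM <-> CM: foreground attention should agree with the summed
    # probability of the C action classes (reduces false negatives).
    loss_fm_cm = np.mean((fg_att - action_prob) ** 2)
    # BM <-> CM: background attention should agree with the (C+1)th
    # background-class probability (reduces false positives).
    loss_bm_cm = np.mean((bg_att - bg_prob) ** 2)
    # FM <-> BM: the two attentions should be complementary, enforcing
    # foreground-background separation.
    loss_fm_bm = np.mean((fg_att + bg_att - 1.0) ** 2)
    return loss_fm_cm, loss_bm_cm, loss_fm_bm
```

When the branches agree perfectly (foreground attention equal to the summed action probability, background attention equal to the background probability), all three terms vanish; any disagreement is penalized quadratically.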
Pages: 6939-6951
Page count: 13