Collaborative Foreground, Background, and Action Modeling Network for Weakly Supervised Temporal Action Localization

Cited by: 9
Authors
Moniruzzaman, Md. [1 ]
Yin, Zhaozheng [2 ]
Affiliations
[1] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
[2] SUNY Stony Brook, Dept Comp Sci, Dept Biomed Informat, Stony Brook, NY 11794 USA
Funding
US National Science Foundation;
Keywords
Temporal action localization; foreground modeling; background modeling; action modeling;
DOI
10.1109/TCSVT.2023.3272891
CLC Classification Number
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification Code
0808; 0809;
Abstract
In this paper, we explore the problem of Weakly supervised Temporal Action Localization (W-TAL), where the task is to localize the temporal boundaries of all action instances in an untrimmed video with only video-level supervision. Existing W-TAL methods achieve good action localization performance by separating the discriminative action and background frames. However, there is still a large performance gap between the weakly and fully supervised methods. The main reason is that, in addition to the discriminative action and background frames, there are plenty of ambiguous action and background frames. Due to the lack of temporal annotations in W-TAL, the ambiguous background frames may be localized as foreground and the ambiguous action frames may be suppressed as background, resulting in false positives and false negatives, respectively. In this paper, we introduce a novel collaborative Foreground, Background, and Action Modeling Network (FBA-Net) to suppress the background (i.e., both the discriminative and ambiguous background) frames and localize the actual action-related (i.e., both the discriminative and ambiguous action) frames as foreground, for precise temporal action localization. We design our FBA-Net with three branches: the foreground modeling (FM) branch, the background modeling (BM) branch, and the class-specific action and background modeling (CM) branch. The CM branch learns to highlight the video frames related to the C action classes and to separate the action-related frames of the C action classes from the (C + 1)th background class. The collaboration between FM and CM regularizes the consistency between the FM and the C action classes of CM, which reduces the false negative rate by localizing the actual action-related (i.e., both the discriminative and ambiguous action) frames in a video as foreground. On the other hand, the collaboration between BM and CM regularizes the consistency between the BM and the (C + 1)th background class of CM, which reduces the false positive rate by suppressing both the discriminative and ambiguous background frames. Furthermore, the collaboration between FM and BM enforces more effective foreground-background separation. To evaluate the effectiveness of our FBA-Net, we perform extensive experiments on two challenging datasets, THUMOS14 and ActivityNet1.3. The experiments show that our FBA-Net attains superior results.
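As a reading aid, the sketch below illustrates the three-branch idea described in the abstract in PyTorch, assuming precomputed snippet-level features (e.g., I3D) as input. The module names (FBASketch, collaboration_losses), layer sizes, and the MSE-based consistency terms standing in for the FM-CM, BM-CM, and FM-BM collaborations are hypothetical assumptions, not the paper's actual architecture or objectives.

```python
# Minimal, hypothetical sketch of a three-branch FM/BM/CM design with
# collaboration (consistency) losses. Names, sizes, and losses are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FBASketch(nn.Module):
    def __init__(self, feat_dim=2048, num_classes=20):
        super().__init__()
        # Foreground modeling (FM): class-agnostic foreground attention per snippet.
        self.fm = nn.Sequential(nn.Conv1d(feat_dim, 256, 3, padding=1), nn.ReLU(),
                                nn.Conv1d(256, 1, 1), nn.Sigmoid())
        # Background modeling (BM): class-agnostic background attention per snippet.
        self.bm = nn.Sequential(nn.Conv1d(feat_dim, 256, 3, padding=1), nn.ReLU(),
                                nn.Conv1d(256, 1, 1), nn.Sigmoid())
        # Class-specific modeling (CM): (C + 1)-way snippet scores,
        # the extra class standing for background.
        self.cm = nn.Sequential(nn.Conv1d(feat_dim, 256, 3, padding=1), nn.ReLU(),
                                nn.Conv1d(256, num_classes + 1, 1))

    def forward(self, x):                      # x: (B, D, T) snippet features
        fg = self.fm(x).squeeze(1)             # (B, T) foreground attention
        bg = self.bm(x).squeeze(1)             # (B, T) background attention
        cas = self.cm(x)                       # (B, C + 1, T) class activation sequence
        return fg, bg, cas

def collaboration_losses(fg, bg, cas):
    """Hypothetical consistency terms echoing the three collaborations in the abstract."""
    probs = cas.softmax(dim=1)                 # per-snippet class distribution
    action_prob = probs[:, :-1].sum(dim=1)     # mass on the C action classes
    bg_prob = probs[:, -1]                     # mass on the (C + 1)th background class
    loss_fm_cm = F.mse_loss(fg, action_prob)   # FM agrees with CM's action classes
    loss_bm_cm = F.mse_loss(bg, bg_prob)       # BM agrees with CM's background class
    loss_fm_bm = F.mse_loss(fg + bg, torch.ones_like(fg))  # FM and BM are complementary
    return loss_fm_cm + loss_bm_cm + loss_fm_bm

if __name__ == "__main__":
    model = FBASketch()
    feats = torch.randn(2, 2048, 100)          # e.g., I3D features, T = 100 snippets
    fg, bg, cas = model(feats)
    print(collaboration_losses(fg, bg, cas).item())
```

In this sketch the three consistency terms mirror the three collaborations described above: FM is pulled toward CM's total action mass (reducing false negatives), BM toward CM's background class (reducing false positives), and FM and BM are encouraged to be complementary for foreground-background separation.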
Pages: 6939-6951
Page count: 13