Video Joint Modelling Based on Hierarchical Transformer for Co-Summarization

Cited by: 14
Authors
Li, Haopeng [1 ]
Ke, Qiuhong [2 ]
Gong, Mingming [3 ]
Zhang, Rui [4 ]
Affiliations
[1] Univ Melbourne, Sch Comp & Informat Syst, Parkville, Vic 3010, Australia
[2] Monash Univ, Dept Data Sci & AI, Parkville, Vic 3010, Australia
[3] Univ Melbourne, Sch Math & Stat, Parkville, Vic 3010, Australia
[4] Tsinghua Univ, Beijing 100190, Peoples R China
Keywords
Transformers; Semantics; Correlation; Computational modeling; Training; Task analysis; Video on demand; Video summarization; co-summarization; hierarchical transformer; representation reconstruction;
DOI
10.1109/TPAMI.2022.3186506
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Video summarization aims to automatically generate a summary (storyboard or video skim) of a video, which can facilitate large-scale video retrieval and browsing. Most of the existing methods perform video summarization on individual videos, which neglects the correlations among similar videos. Such correlations, however, are also informative for video understanding and video summarization. To address this limitation, we propose Video Joint Modelling based on Hierarchical Transformer (VJMHT) for co-summarization, which takes into consideration the semantic dependencies across videos. Specifically, VJMHT consists of two layers of Transformer: the first layer extracts semantic representation from individual shots of similar videos, while the second layer performs shot-level video joint modelling to aggregate cross-video semantic information. By this means, complete cross-video high-level patterns are explicitly modelled and learned for the summarization of individual videos. Moreover, Transformer-based video representation reconstruction is introduced to maximize the high-level similarity between the summary and the original video. Extensive experiments are conducted to verify the effectiveness of the proposed modules and the superiority of VJMHT in terms of F-measure and rank-based evaluation.
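The two-layer hierarchy described in the abstract can be illustrated with a minimal, hypothetical sketch: a first attention layer encodes the frames inside each shot and pools them into a shot embedding, and a second layer attends over the pooled shots of *all* similar videos so each shot representation absorbs cross-video context. This is plain-Python pseudocode under assumed names (`shot_encoder`, `joint_model`, mean pooling); it is not the authors' implementation, which uses full multi-head Transformers.

```python
import math

def attention(queries, keys, values):
    """Single-head scaled dot-product attention over lists of vectors."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        m = max(scores)                      # subtract max for a stable softmax
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        w = [wi / z for wi in w]
        out.append([sum(wi * v[j] for wi, v in zip(w, values)) for j in range(d)])
    return out

def shot_encoder(frames):
    """Layer 1: self-attention over a shot's frame features, mean-pooled."""
    attended = attention(frames, frames, frames)
    d = len(frames[0])
    return [sum(f[j] for f in attended) / len(attended) for j in range(d)]

def joint_model(videos):
    """Layer 2: shot-level self-attention across the pool of similar videos."""
    shots = [shot_encoder(shot) for video in videos for shot in video]
    return attention(shots, shots, shots)

# Toy example: two "similar videos" whose shots hold 2-D frame features.
video_a = [[[1.0, 0.0], [0.9, 0.1]], [[0.0, 1.0]]]   # two shots
video_b = [[[1.0, 0.1]]]                             # one shot
joint = joint_model([video_a, video_b])
print(len(joint), len(joint[0]))                     # → 3 2 (one embedding per shot)
```

The key design point the sketch captures is that layer 2's attention keys and values span shots from every video in the similar-video pool, which is how cross-video high-level patterns enter each individual video's shot representations.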
Pages: 3904-3917 (14 pages)