Text-Video Retrieval via Multi-Modal Hypergraph Networks

Cited by: 1
Authors
Li, Qian [1]
Su, Lixin [1]
Zhao, Jiashu [2]
Xia, Long [1]
Cai, Hengyi [3]
Cheng, Suqi [1]
Tang, Hengzhu [1]
Wang, Junfeng [1]
Yin, Dawei [1]
Affiliations
[1] Baidu Inc, Beijing, Peoples R China
[2] Wilfrid Laurier Univ, Waterloo, ON, Canada
[3] Chinese Acad Sci, Inst Comp Technol, Beijing, Peoples R China
Keywords
text-video retrieval; multi-modal hypergraph; hypergraph neural networks
DOI
10.1145/3616855.3635757
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Text-video retrieval is a challenging task that aims to identify relevant videos given textual queries. Compared to conventional text retrieval, the main obstacle in text-video retrieval is the semantic gap between the textual nature of queries and the visual richness of video content. Previous works primarily focus on aligning the query and the video by finely aggregating word-frame matching signals. Inspired by the human cognitive process of judging text-video relevance in a modular fashion, we argue that this judgment requires high-order matching signals because of the consecutive and complex nature of video content. In this paper, we propose chunk-level text-video matching, in which query chunks are extracted to describe specific retrieval units and video chunks are segmented into distinct clips of the video. We formulate chunk-level matching as modeling n-ary correlations between the words of the query and the frames of the video, and introduce a multi-modal hypergraph for this purpose: textual units and video frames are represented as nodes, and hyperedges depict their relationships. In this way, the query and the video can be aligned in a high-order semantic space. In addition, to enhance the model's generalization ability, the extracted features are fed into a variational inference component, yielding variational representations under a Gaussian distribution. The combination of hypergraphs and variational inference allows our model to capture complex, n-ary interactions among textual and visual content. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on the text-video retrieval task.
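The abstract describes the architecture only at a high level. As a concrete illustration, the following Python (PyTorch) sketch shows the two ingredients it names: hypergraph message passing over an incidence matrix that links word and frame nodes through chunk-level hyperedges, and a variational head that maps pooled features to a Gaussian representation via the reparameterization trick. This is a minimal sketch under assumed tensor shapes, layer sizes, and hyperedge choices, not the authors' implementation.

# Illustrative sketch only -- not the released implementation of the paper.
# Assumed setup: word and frame embeddings are stacked into one node matrix X,
# and each chunk-level hyperedge is a set of node indices (an n-ary correlation).

import torch
import torch.nn as nn
import torch.nn.functional as F


class HypergraphConv(nn.Module):
    """One round of node -> hyperedge -> node message passing
    (a simple mean-aggregation variant of a hypergraph convolution)."""

    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, X, H):
        # X: (num_nodes, dim) word + frame node features
        # H: (num_nodes, num_edges) incidence matrix, H[v, e] = 1 if node v is in hyperedge e
        d_v = H.sum(dim=1).clamp(min=1)                   # node degrees
        d_e = H.sum(dim=0).clamp(min=1)                   # hyperedge degrees
        edge_feat = (H.t() @ X) / d_e.unsqueeze(1)        # aggregate nodes into hyperedges
        node_feat = (H @ edge_feat) / d_v.unsqueeze(1)    # scatter hyperedge messages back to nodes
        return F.relu(self.proj(node_feat)) + X           # residual update


class VariationalHead(nn.Module):
    """Map pooled features to a Gaussian and sample with the reparameterization trick."""

    def __init__(self, dim):
        super().__init__()
        self.mu = nn.Linear(dim, dim)
        self.logvar = nn.Linear(dim, dim)

    def forward(self, x):
        mu, logvar = self.mu(x), self.logvar(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)          # z ~ N(mu, sigma^2)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())    # KL(N(mu, sigma^2) || N(0, I))
        return z, kl


# Toy usage: 6 word nodes + 10 frame nodes, 4 chunk-level hyperedges (all values assumed).
dim = 64
X = torch.randn(16, dim)
H = torch.zeros(16, 4)
H[:6, 0] = 1      # a query chunk grouping several words
H[6:10, 1] = 1    # a video chunk grouping consecutive frames
H[3:12, 2] = 1    # a cross-modal hyperedge tying words to frames (n-ary correlation)
H[:, 3] = 1       # a global hyperedge over all nodes

layer = HypergraphConv(dim)
head = VariationalHead(dim)
nodes = layer(X, H)                  # high-order matching over the hypergraph
z, kl = head(nodes.mean(dim=0))      # pooled, variational representation
print(z.shape, kl.item())

In a full retrieval model, query-side and video-side representations of this kind would be compared with a similarity score and the KL term added to the training loss as a regularizer; those pieces are omitted from the sketch.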
Pages: 369-377 (9 pages)
Related Papers
50 records in total
  • [41] Sinkhorn Transformations for Single-Query Postprocessing in Text-Video Retrieval. Yakovlev, Konstantin; Polyakov, Gregory; Alimova, Ilseyar; Podolskiy, Alexander; Bout, Andrey; Nikolenko, Sergey; Piontkovskaya, Irina. Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2023), 2023: 2394-2398.
  • [42] Dual-Modal Attention-Enhanced Text-Video Retrieval with Triplet Partial Margin Contrastive Learning. Jiang, Chen; Liu, Hong; Yu, Xuzheng; Wang, Qing; Cheng, Yuan; Xu, Jia; Liu, Zhongyi; Guo, Qingpei; Chu, Wei; Yang, Ming; Qi, Yuan. Proceedings of the 31st ACM International Conference on Multimedia (MM 2023), 2023: 4626-4636.
  • [43] Multi-modal Video Retrieval in Virtual Reality with vitrivr-VR. Spiess, Florian; Gasser, Ralph; Heller, Silvan; Parian-Scherb, Mahnaz; Rossetto, Luca; Sauter, Loris; Schuldt, Heiko. Multimedia Modeling (MMM 2022), Part II, vol. 13142, 2022: 499-504.
  • [44] An Interactive Video Search Platform for Multi-modal Retrieval with Advanced Concepts. Nguyen-Khang Le; Dieu-Hien Nguyen; Minh-Triet Tran. Multimedia Modeling (MMM 2020), Part II, vol. 11962, 2020: 766-771.
  • [45] Multi-modal Video Summarization. Huang, Jia-Hong. Proceedings of the 2024 International Conference on Multimedia Retrieval (ICMR 2024), 2024: 1214-1218.
  • [47] Mind-the-Gap! Unsupervised Domain Adaptation for Text-Video Retrieval. Chen, Qingchao; Liu, Yang; Albanie, Samuel. Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI 2021), 2021, 35: 1072-1080.
  • [48] Hadamard matrix-guided multi-modal hashing for multi-modal retrieval. Yu, Jun; Huang, Wei; Li, Zuhe; Shu, Zhenqiu; Zhu, Liang. Digital Signal Processing, 2022, 130.
  • [49] In-Style: Bridging Text and Uncurated Videos with Style Transfer for Text-Video Retrieval. Shvetsova, Nina; Kukleva, Anna; Schiele, Bernt; Kuehne, Hilde. 2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023), 2023: 21924+.
  • [50] Multi-modal molecule structure-text model for text-based retrieval and editing. Liu, Shengchao; Nie, Weili; Wang, Chengpeng; Lu, Jiarui; Qiao, Zhuoran; Liu, Ling; Tang, Jian; Xiao, Chaowei; Anandkumar, Animashree. Nature Machine Intelligence, 2023, 5(12): 1447-1457.