Semantic2Graph: graph-based multi-modal feature fusion for action segmentation in videos

Cited by: 1
Authors
Zhang, Junbin [1 ]
Tsai, Pei-Hsuan [2 ]
Tsai, Meng-Hsun [1 ,3 ]
Affiliations
[1] Natl Cheng Kung Univ, Dept Comp Sci & Informat Engn, Tainan 701, Taiwan
[2] Natl Cheng Kung Univ, Inst Mfg Informat & Syst, Tainan 701, Taiwan
[3] Natl Yang Ming Chiao Tung Univ, Dept Comp Sci, Hsinchu 300, Taiwan
Keywords
Video action segmentation; Graph neural networks; Computer vision; Semantic features; Multi-modal fusion; Convolutional network; Localization; Attention
DOI
10.1007/s10489-023-05259-z
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Video action segmentation has been widely applied in many fields. Most previous studies employed video-based vision models for this purpose, but these often rely on large receptive fields, LSTMs, or Transformers to capture long-term dependencies within videos, leading to significant computational resource requirements. Graph-based models were proposed to address this challenge, but previous graph-based models are less accurate. Hence, this study introduces a graph-structured approach named Semantic2Graph to model long-term dependencies in videos, thereby reducing computational costs and improving accuracy. We construct a frame-level graph structure of the video. Temporal edges model the temporal relations and action order within videos. Additionally, we design positive and negative semantic edges, with corresponding edge weights, to capture both long-term and short-term semantic relationships among video actions. Node attributes comprise a rich set of multi-modal features extracted from video content, graph structure, and label text, covering visual, structural, and semantic cues. To synthesize this multi-modal information effectively, we employ a graph neural network (GNN) model to fuse the multi-modal features for node-level action label classification. Experimental results demonstrate that Semantic2Graph outperforms state-of-the-art methods, particularly on benchmark datasets such as GTEA and 50Salads. Multiple ablation experiments further validate the effectiveness of semantic features in enhancing model performance. Notably, the inclusion of semantic edges allows Semantic2Graph to capture long-term dependencies cost-effectively, affirming its utility in addressing the computational resource constraints of video-based vision models.
Pages: 2084-2099 (16 pages)
Related papers (50 total)
  • [1] Semantic2Graph: graph-based multi-modal feature fusion for action segmentation in videos
    Junbin Zhang
    Pei-Hsuan Tsai
    Meng-Hsun Tsai
    Applied Intelligence, 2024, 54 : 2084 - 2099
  • [2] Flexible Multi-modal Graph-Based Segmentation
    Sanberg, Willem P.
    Do, Luat
    de With, Peter H. N.
    ADVANCED CONCEPTS FOR INTELLIGENT VISION SYSTEMS, ACIVS 2013, 2013, 8192 : 492 - 503
  • [3] Leveraging multi-modal fusion for graph-based image annotation
    Amiri, S. Hamid
    Jamzad, Mansour
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2018, 55 : 816 - 828
  • [4] Graph-Based Multi-Modal Multi-View Fusion for Facial Action Unit Recognition
    Chen, Jianrong
    Dey, Sujit
    IEEE ACCESS, 2024, 12 : 69310 - 69324
  • [5] Representation and Fusion Based on Knowledge Graph in Multi-Modal Semantic Communication
    Xing, Chenlin
    Lv, Jie
    Luo, Tao
    Zhang, Zhilong
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2024, 13 (05) : 1344 - 1348
  • [6] A Novel Graph-based Multi-modal Fusion Encoder for Neural Machine Translation
    Yin, Yongjing
    Meng, Fandong
    Su, Jinsong
    Zhou, Chulun
    Yang, Zhengyuan
    Zhou, Jie
    Luo, Jiebo
    58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020), 2020, : 3025 - 3035
  • [7] Graph-Based Semantic Segmentation
    Balaska, Vasiliki
    Bampis, Loukas
    Gasteratos, Antonios
    ADVANCES IN SERVICE AND INDUSTRIAL ROBOTICS, RAAD 2018, 2019, 67 : 572 - 579
  • [8] Multi-modal Action Segmentation in the Kitchen with a Feature Fusion Approach
    Kogure, Shunsuke
    Aoki, Yoshimitsu
    FIFTEENTH INTERNATIONAL CONFERENCE ON QUALITY CONTROL BY ARTIFICIAL VISION, 2021, 11794
  • [9] GRAPH-BASED MULTI-MODAL SCENE DETECTION FOR MOVIE AND TELEPLAY
    Xu, Su
    Feng, Bailan
    Ding, Peng
    Xu, Bo
    2012 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2012, : 1413 - 1416
  • [10] Portable graph-based rumour detection against multi-modal heterophily
    Nguyen, Thanh Tam
    Ren, Zhao
    Nguyen, Thanh Toan
    Jo, Jun
    Nguyen, Quoc Viet Hung
    Yin, Hongzhi
    KNOWLEDGE-BASED SYSTEMS, 2024, 284