Graph-Guided Multi-view Text Classification: Advanced Solutions for Fast Inference

Cited by: 0
Authors
Gao, Nan [1 ]
Wang, Yongjian [1 ]
Chen, Peng [1 ]
Zheng, Xin [1 ]
Institutions
[1] Zhejiang Univ Technol, Hangzhou, Peoples R China
Keywords
Graph Neural Network; Multi-Perspective Fusion; Remote Feature Extraction; Fast Inference
DOI
10.1007/978-3-031-72344-5_9
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Large-scale pre-trained models such as BERT have recently performed well on text classification tasks. However, these models have large parameter counts and high memory requirements, making them difficult to deploy in real-time or resource-constrained scenarios. Researchers have therefore turned to lightweight Graph Neural Networks (GNNs), which offer strong feature expressiveness, as an alternative. However, current GNN-based methods focus solely on the structural information of texts, ignoring the sequence information and long-distance dependencies between nodes. To address these problems, we propose G2TX, a lightweight network based on multi-view feature fusion that achieves a balance between model performance and parameter count. First, to address the challenge of unordered nodes in graph structures, we introduce a Multi Sequence Fusion Module (MSF) that enhances node sequence information by integrating features from multiple views through diverse word-level and text-level fusion strategies. Second, to expand the receptive field of nodes, we propose a Remote Feature Extraction Module (RFE) that bridges the interaction gap between word nodes and remote nodes. Finally, we use KL divergence to integrate the features of both MSF and RFE. Experimental results demonstrate that our model achieves state-of-the-art performance with fewer parameters and fast inference.
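The abstract's final fusion step, aligning the MSF and RFE views with KL divergence, can be sketched as follows. This is a minimal illustration assuming the two views each produce class logits and are tied together by a symmetric KL consistency term; the function and variable names (`consistency_loss`, `msf_logits`, `rfe_logits`) are hypothetical, and the paper's exact formulation may differ.

```python
import numpy as np

def softmax(logits):
    # Subtract the row-wise max for numerical stability before exponentiating
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    # KL(p || q) between per-sample class distributions, averaged over the batch
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))

def consistency_loss(msf_logits, rfe_logits):
    # Symmetric KL between the two views' predictive distributions;
    # this is an assumed form of the cross-view integration step.
    p, q = softmax(msf_logits), softmax(rfe_logits)
    return 0.5 * (kl_div(p, q) + kl_div(q, p))

# Toy example: batch of 2 samples, 3 classes
msf = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 0.3]])
rfe = np.array([[1.8, 0.4, -0.9], [0.0, 0.3, 0.2]])
loss = consistency_loss(msf, rfe)       # small positive value: views nearly agree
identical = consistency_loss(msf, msf)  # exactly 0: a view agrees with itself
```

In training, such a term would typically be added to the classification loss so that the sequence-aware (MSF) and long-range (RFE) branches regularize each other.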
Pages: 126 - 142
Page count: 17
Related Papers
50 results
  • [1] Pure graph-guided multi-view subspace clustering
    Wu, Hongjie
    Huang, Shudong
    Tang, Chenwei
    Zhang, Yancheng
    Lv, Jiancheng
    PATTERN RECOGNITION, 2023, 136
  • [2] Graph-guided imputation-free incomplete multi-view clustering
    Bai, Shunshun
    Zheng, Qinghai
    Ren, Xiaojin
    Zhu, Jihua
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 258
  • [3] Fast Multi-view Graph Kernels for Object Classification
    Zhang, Luming
    Song, Mingli
    Bu, Jiajun
    Chen, Chun
    AI 2011: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2011, 7106 : 570 - 579
  • [4] Fast multi-view segment graph kernel for object classification
    Zhang, Luming
    Song, Mingli
    Liu, Xiao
    Bu, Jiajun
    Chen, Chun
    SIGNAL PROCESSING, 2013, 93 (06) : 1597 - 1607
  • [5] No Matter Where You Are: Flexible Graph-guided Multi-task Learning for Multi-view Head Pose Classification under Target Motion
    Yan, Yan
    Ricci, Elisa
    Subramanian, Ramanathan
    Lanz, Oswald
    Sebe, Nicu
    2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2013, : 1177 - 1184
  • [6] Multi-view Graph-Based Text Representations for Imbalanced Classification
    Karajeh, Ola
    Lourentzou, Ismini
    Fox, Edward A.
    LINKING THEORY AND PRACTICE OF DIGITAL LIBRARIES, TPDL 2023, 2023, 14241 : 249 - 264
  • [7] Evolving Multi-view Autoencoders for Text Classification
    Ha, Tuan
    Gao, Xiaoying
    2021 IEEE/WIC/ACM INTERNATIONAL CONFERENCE ON WEB INTELLIGENCE AND INTELLIGENT AGENT TECHNOLOGY (WI-IAT 2021), 2021, : 270 - 276
  • [8] Multi-View Robust Graph Representation Learning for Graph Classification
    Ma, Guanghui
    Hu, Chunming
    Ge, Ling
    Zhang, Hong
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 4037 - 4045
  • [9] Multi-View Guided Multi-View Stereo
    Poggi, Matteo
    Conti, Andrea
    Mattoccia, Stefano
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 8391 - 8398
  • [10] Fast Multi-View Clustering via Prototype Graph
    Shi, Shaojun
    Nie, Feiping
    Wang, Rong
    Li, Xuelong
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (01) : 443 - 455