Fusing heterogeneous information for multi-modal attributed network embedding

Cited by: 0
Authors
Yang Jieyi
Zhu Feng
Dong Yihong
Qian Jiangbo
Affiliation
[1] Ningbo University, Faculty of Electrical Engineering and Computer Science
Source
Applied Intelligence | 2023, Vol. 53
Keywords
Multimodal attributed network; Heterogeneous network; Graph embedding; Graph neural network;
DOI: not available
Abstract
In the real world, networks often contain many types of nodes and edges, forming heterogeneous networks. For instance, a film network contains different node types, such as directors, films, and actors, as well as different edge types and multimodal attributes. Most existing attributed network embedding algorithms cannot flexibly capture the impact of multimodal attributes on the topology. Fusing multimodal features too early entangles the different attribute information in the embedding, while a late fusion strategy ignores the interactions between modalities; both choices degrade graph embedding. To address this problem, we propose a multimodal attributed network representation learning algorithm based on heterogeneous information fusion, named FHIANE. It extracts features from multimodal information sources through deep heterogeneous convolutional networks and projects them into a consistent semantic space while preserving structural information. In addition, we design a modality fusion network based on an extended attention mechanism that exploits both the consistency and the complementarity of multimodal information. We evaluate FHIANE on several real-world datasets through challenging tasks such as link prediction and node classification. The experimental results show that FHIANE outperforms the other baselines.
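The abstract's central idea, attention-weighted fusion of per-modality embeddings that have already been projected into a shared semantic space, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual FHIANE implementation; the function names, the single query vector, and the toy embeddings are all assumptions.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(modality_embeddings, query):
    """Fuse per-modality embeddings of one node with attention weights.

    modality_embeddings: list of same-length vectors, one per modality,
                         assumed already projected into a shared semantic space.
    query: vector used to score each modality's relevance to this node.
    """
    # Score each modality by its dot product with the query.
    scores = [sum(q * x for q, x in zip(query, emb))
              for emb in modality_embeddings]
    # Softmax makes the modalities compete for weight (consistency),
    # while the weighted sum still lets every modality contribute
    # (complementarity).
    weights = softmax(scores)
    dim = len(modality_embeddings[0])
    fused = [sum(w * emb[i] for w, emb in zip(weights, modality_embeddings))
             for i in range(dim)]
    return fused, weights

# Toy example: text and image embeddings for one node (made-up values).
text_emb = [0.2, -0.1, 0.5, 0.3]
img_emb  = [0.4,  0.0, -0.2, 0.1]
query    = [1.0,  0.5,  0.5, 1.0]
fused, weights = attention_fuse([text_emb, img_emb], query)
```

In this sketch the fused vector keeps the shared dimensionality, so it can be fed directly into downstream tasks such as the link prediction and node classification experiments the abstract mentions.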
Pages: 22328–22347 (19 pages)