Multi-modal Multi-relational Feature Aggregation Network for Medical Knowledge Representation Learning

Cited by: 8
Authors
Zhang, Yingying [1 ]
Fang, Quan [1 ]
Qian, Shengsheng [1 ]
Xu, Changsheng [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Univ Chinese Acad Sci, Beijing, Peoples R China
[2] Peng Cheng Lab, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
knowledge graph; heterogeneous graph; attention mechanism;
DOI
10.1145/3394171.3413736
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Representation learning of medical Knowledge Graphs (KGs) is an important task and forms the foundation for intelligent medical applications such as disease diagnosis and healthcare question answering. Many embedding models have been proposed to learn vector representations for entities and relations, but they ignore three important properties of medical KGs: they are multi-modal, unbalanced, and heterogeneous. Entities in a medical KG can carry unstructured multi-modal content, such as images and text. At the same time, the knowledge graph consists of multiple types of entities and relations, and each entity has a varying number of neighbors. In this paper, we propose a Multi-modal Multi-Relational Feature Aggregation Network (MMRFAN) for medical knowledge representation learning. To handle the multi-modal content of an entity, we propose an adversarial feature learning model that maps the textual and image information of the entity into the same vector space and learns a multi-modal common representation. To better capture the complex structure and rich semantics, we design a sampling mechanism and aggregate neighbors with intra- and inter-relation attention. We evaluate our model on three knowledge graphs, FB15k-237, IMDb, and Symptoms-in-Chinese, with link prediction and node classification tasks. Experimental results show that our approach outperforms state-of-the-art methods.
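The two-level aggregation the abstract describes (intra-relation attention over a relation's neighbors, then inter-relation attention over the per-relation summaries) can be sketched as follows. This is a minimal illustrative NumPy sketch, not the paper's exact formulation: the `aggregate` function name and the use of plain dot-product attention scores are assumptions made for clarity.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate(entity, neighbors_by_relation):
    """Two-level attention aggregation (illustrative sketch).

    entity: (d,) embedding of the target entity
    neighbors_by_relation: dict mapping relation name -> (n_r, d) array
    Returns a (d,) aggregated neighborhood representation.
    """
    rel_summaries, rel_scores = [], []
    for rel, neigh in neighbors_by_relation.items():
        # intra-relation attention: weight each neighbor of this relation
        # by its (dot-product) compatibility with the target entity
        weights = softmax(neigh @ entity)        # (n_r,)
        summary = weights @ neigh                # (d,) per-relation summary
        rel_summaries.append(summary)
        # score each relation summary against the entity for the next level
        rel_scores.append(summary @ entity)
    rel_summaries = np.stack(rel_summaries)      # (R, d)
    # inter-relation attention: combine the per-relation summaries
    rel_weights = softmax(np.array(rel_scores))  # (R,)
    return rel_weights @ rel_summaries           # (d,)
```

With a single relation holding a single neighbor, both attention levels collapse to weight 1, so the output is that neighbor's embedding; in the paper the scores would instead come from learned attention parameters trained end-to-end.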
Pages: 3956-3965
Page count: 10
Related papers
50 records in total
  • [21] MMKRL: A robust embedding approach for multi-modal knowledge graph representation learning
    Lu, Xinyu
    Wang, Lifang
    Jiang, Zejun
    He, Shichang
    Liu, Shizhong
    APPLIED INTELLIGENCE, 2022, 52 (07) : 7480 - 7497
  • [23] Knowledge Synergy Learning for Multi-Modal Tracking
    He, Yuhang
    Ma, Zhiheng
    Wei, Xing
    Gong, Yihong
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (07) : 5519 - 5532
  • [24] OctopusNet: A Deep Learning Segmentation Network for Multi-modal Medical Images
    Chen, Yu
    Chen, Jiawei
    Wei, Dong
    Li, Yuexiang
    Zheng, Yefeng
    MULTISCALE MULTIMODAL MEDICAL IMAGING, MMMI 2019, 2020, 11977 : 17 - 25
  • [25] Multi-modal knowledge graphs representation learning via multi-headed self-attention
    Wang, Enqiang
    Yu, Qing
    Chen, Yelin
    Slamu, Wushouer
    Luo, Xukang
    INFORMATION FUSION, 2022, 88 : 78 - 85
  • [26] Multi-Relational Graph Representation Learning for Financial Statement Fraud Detection
    Wang, Chenxu
    Wang, Mengqin
    Wang, Xiaoguang
    Zhang, Luyue
    Long, Yi
    BIG DATA MINING AND ANALYTICS, 2024, 7 (03): : 920 - 941
  • [27] Multi-relational EHR representation learning with infusing information of Diagnosis and Medication
    Shi, Yu
    Guo, Yuhang
    Wu, Hao
    Li, Jingxiu
    Li, Xin
    2021 IEEE 45TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE (COMPSAC 2021), 2021, : 1617 - 1622
  • [28] Fast Multi-Modal Unified Sparse Representation Learning
    Verma, Mridula
    Shukla, Kaushal Kumar
    PROCEEDINGS OF THE 2017 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL (ICMR'17), 2017, : 448 - 452
  • [29] Multi-modal Representation Learning for Successive POI Recommendation
    Li, Lishan
    Liu, Ying
    Wu, Jianping
    He, Lin
    Ren, Gang
    ASIAN CONFERENCE ON MACHINE LEARNING, VOL 101, 2019, 101 : 441 - 456
  • [30] Joint Representation Learning for Multi-Modal Transportation Recommendation
    Liu, Hao
    Li, Ting
    Hu, Renjun
    Fu, Yanjie
    Gu, Jingjing
    Xiong, Hui
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 1036 - 1043