AspectMMKG: A Multi-modal Knowledge Graph with Aspect-aware Entities

Cited by: 2
Authors
Zhang, Jingdan [1 ]
Wang, Jiaan [2 ]
Wang, Xiaodan [1 ]
Li, Zhixu [1 ]
Xiao, Yanghua [1 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai Key Lab Data Sci, Shanghai, Peoples R China
[2] Soochow Univ, Sch Comp Sci & Technol, Suzhou, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Knowledge Graph; Multi-modal Knowledge Graph; Image Retrieval
DOI
10.1145/3583780.3614782
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Multi-modal knowledge graphs (MMKGs) combine data from different modalities (e.g., text and images) for a comprehensive understanding of entities. Despite recent progress on large-scale MMKGs, existing MMKGs neglect the multi-aspect nature of entities, limiting the ability to comprehend entities from various perspectives. In this paper, we construct AspectMMKG, the first MMKG with aspect-related images, built by matching images to different entity aspects. Specifically, we collect aspect-related images from a knowledge base, and further extract aspect-related sentences from the knowledge base as queries to retrieve a large number of aspect-related images via an online image search engine. In total, AspectMMKG contains 2,380 entities, 18,139 entity aspects, and 645,383 aspect-related images. We demonstrate the usability of AspectMMKG on the entity aspect linking (EAL) downstream task and show that previous EAL models achieve new state-of-the-art performance with the help of AspectMMKG. To facilitate research on aspect-related MMKGs, we further propose an aspect-related image retrieval (AIR) model that aims to correct and expand the aspect-related images in AspectMMKG. The AIR model learns the relationship between an entity image and the entity's aspect-related images by incorporating entity image, aspect, and aspect-image information. Experimental results indicate that the AIR model can retrieve suitable images for a given entity w.r.t. different aspects.
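The abstract describes a two-step collection pipeline: aspect-related sentences are extracted from a knowledge base and used as search-engine queries, and the retrieved images are grouped per (entity, aspect) pair. The following minimal Python sketch illustrates that pipeline under stated assumptions; the function names, data layout, and the `search_images` stub (which fabricates URLs in place of a real search API) are illustrative and are not the authors' actual code.

```python
# Hypothetical sketch of the AspectMMKG collection step: aspect-related
# sentences serve as queries, and retrieved images are grouped by aspect.

def build_queries(entity, aspects):
    """Combine the entity name with each aspect-related sentence to form queries."""
    return {aspect: [f"{entity} {sent}" for sent in sents]
            for aspect, sents in aspects.items()}

def search_images(query, top_k=3):
    """Stand-in for an online image search engine (returns synthetic URLs)."""
    return [f"https://img.example/{query.replace(' ', '_')}/{i}.jpg"
            for i in range(top_k)]

def collect_aspect_images(entity, aspects, top_k=3):
    """Retrieve and deduplicate aspect-related images for one entity."""
    kg_entry = {}
    for aspect, queries in build_queries(entity, aspects).items():
        urls = []
        for q in queries:
            urls.extend(search_images(q, top_k))
        kg_entry[aspect] = sorted(set(urls))  # dedupe within an aspect
    return kg_entry

# Toy example with two aspects and one aspect-related sentence each.
aspects = {
    "career": ["Messi joined FC Barcelona's youth academy in 2000."],
    "awards": ["Messi won his first Ballon d'Or in 2009."],
}
entry = collect_aspect_images("Lionel Messi", aspects)
print({aspect: len(urls) for aspect, urls in entry.items()})
```

In the real pipeline, a learned AIR model would then score each retrieved image against the entity and aspect to filter out mismatches; this sketch only covers query construction and grouping.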
Pages: 3361 - 3370 (10 pages)