Function-level Vulnerability Detection Through Fusing Multi-Modal Knowledge

Cited by: 1
Authors
Ni, Chao [1 ]
Guo, Xinrong [1 ]
Zhu, Yan [1 ]
Xu, Xiaodan [1 ]
Yang, Xiaohu [1 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Vulnerability Detection; Computer Vision; Deep Learning; Multi-Modal Code Representations;
DOI
10.1109/ASE56229.2023.00084
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Software vulnerabilities damage the functionality of software systems. Recently, many deep learning-based approaches have been proposed to detect vulnerabilities at the function level by using one or a few modalities of the function (e.g., textual or graph-based representations), and they have achieved promising performance. However, some existing studies do not fully leverage these diverse modalities, in particular the underutilized image modality, while others that represent functions as images do not adequately exploit the graph structure underlying those images. In this paper, we propose MVulD, a multi-modal function-level vulnerability detection approach that fuses textual, graph, and image representations of a function to detect vulnerabilities. Specifically, MVulD uses a pre-trained model (i.e., UniXcoder) to learn the semantics of the textual source code, employs a graph neural network to distill the graph-based representation, and applies computer vision techniques to obtain an image representation that retains the function's graph structure. We conducted a large-scale experiment on 25,816 functions. The results show that MVulD improves four state-of-the-art baselines by 30.8%-81.3%, 12.8%-27.4%, 48.8%-115%, and 22.9%-141% in terms of F1-score, Accuracy, Precision, and PR-AUC, respectively.
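To make the fusion idea concrete, the sketch below shows a three-branch classifier in PyTorch that mirrors the pipeline described in the abstract: a projected text embedding (standing in for UniXcoder), one graph-convolution layer with mean pooling over the code graph, and a small CNN over a rendered code image, fused by concatenation. Every module choice, dimension, and the late-fusion strategy here is an illustrative assumption; this is not the paper's actual MVulD architecture.

# Minimal sketch of a three-modality (text / graph / image) vulnerability
# classifier in the spirit of MVulD. All modules, dimensions, and the
# concatenation-based late fusion are illustrative assumptions, not the
# paper's actual architecture.
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    # One graph-convolution step: aggregate node features through the
    # adjacency matrix, then project and apply a nonlinearity.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        # adj: (n, n) adjacency (ideally normalized); x: (n, in_dim)
        return torch.relu(self.proj(adj @ x))

class MultiModalVulnDetector(nn.Module):
    def __init__(self, text_dim=768, node_dim=64, hidden=128):
        super().__init__()
        # Text branch: projects a precomputed function embedding, e.g. the
        # [CLS] vector of a pre-trained code model such as UniXcoder.
        self.text_fc = nn.Linear(text_dim, hidden)
        # Graph branch: a single illustrative graph-conv layer.
        self.gconv = SimpleGraphConv(node_dim, hidden)
        # Image branch: a tiny CNN over the rendered code image.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, hidden),
        )
        # Late fusion: concatenate the three embeddings, then classify.
        self.head = nn.Linear(3 * hidden, 2)  # vulnerable / benign logits

    def forward(self, text_emb, adj, node_feats, image):
        t = torch.relu(self.text_fc(text_emb))       # (hidden,)
        g = self.gconv(adj, node_feats).mean(dim=0)  # mean-pool nodes -> (hidden,)
        v = self.cnn(image).squeeze(0)               # (hidden,)
        return self.head(torch.cat([t, g, v]))       # (2,) class logits

# Illustrative usage with random stand-in inputs for one function.
model = MultiModalVulnDetector()
n = 12  # number of nodes in the function's code graph
logits = model(
    torch.randn(768),           # hypothetical UniXcoder sentence embedding
    torch.eye(n),               # toy adjacency (self-loops only)
    torch.randn(n, 64),         # node features of the code graph
    torch.randn(1, 3, 64, 64),  # rendered code image, batch of one
)

In the actual paper the graph branch would operate on a richer program graph and the image branch on a stronger backbone; this toy keeps only the overall fusion structure of combining three modality embeddings before classification.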
Pages: 1911-1918 (8 pages)