Function-level Vulnerability Detection Through Fusing Multi-Modal Knowledge

Cited by: 1
Authors
Ni, Chao [1 ]
Guo, Xinrong [1 ]
Zhu, Yan [1 ]
Xu, Xiaodan [1 ]
Yang, Xiaohu [1 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Vulnerability Detection; Computer Vision; Deep Learning; Multi-Modal Code Representations;
DOI
10.1109/ASE56229.2023.00084
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Software vulnerabilities damage the functionality of software systems. Recently, many deep learning-based approaches have been proposed to detect vulnerabilities at the function level using one or a few modalities of the function (e.g., text representations, graph-based representations) and have achieved promising performance. However, some existing studies have not fully leveraged these diverse modalities, particularly the underutilized image modality, while those that do represent functions as images for vulnerability detection have not made adequate use of the significant graph structure underlying the images. In this paper, we propose MVulD, a multi-modal function-level vulnerability detection approach that utilizes multi-modal features of the function (i.e., text, graph, and image representations) to detect vulnerabilities. Specifically, MVulD utilizes a pre-trained model (i.e., UniXcoder) to learn the semantic information of the textual source code, employs a graph neural network to distill the graph-based representation, and uses computer vision techniques to obtain the image representation while retaining the graph structure of the function. We conducted a large-scale experiment on 25,816 functions. The experimental results show that MVulD improves four state-of-the-art baselines by 30.8%-81.3%, 12.8%-27.4%, 48.8%-115%, and 22.9%-141% in terms of F1-score, Accuracy, Precision, and PR-AUC, respectively.
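The fusion scheme described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the encoder functions are stand-ins for UniXcoder, the GNN, and the image model, and all names, dimensions, and the late-fusion linear head are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_text(source_code, dim=8):
    # Stand-in for a UniXcoder embedding of the function's source text
    # (hypothetical; the real encoder is a pre-trained transformer).
    return rng.standard_normal(dim)

def encode_graph(code_graph, dim=8):
    # Stand-in for a graph-neural-network readout over the function's
    # graph representation (e.g., a code property graph).
    return rng.standard_normal(dim)

def encode_image(function_image, dim=8):
    # Stand-in for a vision-model feature of the function rendered as an
    # image that preserves its graph structure.
    return rng.standard_normal(dim)

def fuse_and_classify(z_text, z_graph, z_image, W, b):
    # Late fusion: concatenate the three modality embeddings, then apply
    # a linear head with a sigmoid to score P(function is vulnerable).
    z = np.concatenate([z_text, z_graph, z_image])
    logit = float(W @ z) + b
    return 1.0 / (1.0 + np.exp(-logit))

# Illustrative classifier parameters (random; a real model learns these).
W = rng.standard_normal(24)
b = 0.0

p_vulnerable = fuse_and_classify(
    encode_text("int f(char *s) { ... }"),
    encode_graph(None),
    encode_image(None),
    W, b,
)
```

The key design point the abstract emphasizes is that the three modalities are complementary: text carries token-level semantics, the graph carries control/data-flow structure, and the image retains that structure in a form computer-vision models can exploit.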
Pages: 1911 - 1918
Number of pages: 8
Related Papers
50 records
  • [1] Reactor Power Level Estimation by Fusing Multi-Modal Sensor Measurements
    Rao, Nageswara S. V.
    Greulich, Christopher
    Sen, Satyabrata
    Dayman, Kenneth J.
    Hite, Jason
    Ray, Will
    Hale, Richard
    Nicholson, Andrew D.
    Johnson, Jared
    Hunley, Riley D.
    Maceira, Monica
    Chai, Chengping
    Marcillo, Omar
    Karnowski, Tom
    Wetherington, Randall
    PROCEEDINGS OF 2020 23RD INTERNATIONAL CONFERENCE ON INFORMATION FUSION (FUSION 2020), 2020, : 857 - 864
  • [2] A Context-Aware Neural Embedding for Function-Level Vulnerability Detection
    Wei, Hongwei
    Lin, Guanjun
    Li, Lin
    Jia, Heming
    ALGORITHMS, 2021, 14 (11)
  • [3] Fusing Multi-modal Features for Gesture Recognition
    Wu, Jiaxiang
    Cheng, Jian
    Zhao, Chaoyang
    Lu, Hanqing
    ICMI'13: PROCEEDINGS OF THE 2013 ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2013, : 453 - 459
  • [4] Multi-modal news event detection with external knowledge
    Lin, Zehang
    Xie, Jiayuan
    Li, Qing
    INFORMATION PROCESSING & MANAGEMENT, 2024, 61 (03)
  • [5] Multi-Modal Emotion Recognition Fusing Video and Audio
    Xu, Chao
    Du, Pufeng
    Feng, Zhiyong
    Meng, Zhaopeng
    Cao, Tianyi
    Dong, Caichao
    APPLIED MATHEMATICS & INFORMATION SCIENCES, 2013, 7 (02): 455 - 462
  • [6] CROSS-MODAL KNOWLEDGE DISTILLATION IN MULTI-MODAL FAKE NEWS DETECTION
    Wei, Zimian
    Pan, Hengyue
    Qiao, Linbo
    Niu, Xin
    Dong, Peijie
    Li, Dongsheng
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 4733 - 4737
  • [7] Multi-level Interaction Network for Multi-Modal Rumor Detection
    Zou, Ting
    Qian, Zhong
    Li, Peifeng
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [8] Fusing Pressure-Sensitive Mat Data with Video through Multi-Modal Registration
    Kyrollos, Daniel G.
    Hassan, Randa
    Dosso, Yasmina Souley
    Green, James R.
    2021 IEEE INTERNATIONAL INSTRUMENTATION AND MEASUREMENT TECHNOLOGY CONFERENCE (I2MTC 2021), 2021,
  • [9] Multi-modal recommendation algorithm fusing visual and textual features
    Hu, Xuefeng
    Yu, Wenting
    Wu, Yun
    Chen, Yukang
    PLOS ONE, 2023, 18 (06)
  • [10] Multi-modal emotion identification fusing facial expression and EEG
    Wu, Yongzhen
    Li, Jinhua
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (07) : 10901 - 10919