Hierarchical Attention Network for Interpretable and Fine-Grained Vulnerability Detection

Cited by: 1
Authors
Gu, Mianxue [1 ,3 ]
Feng, Hantao [2 ,3 ]
Sun, Hongyu [2 ,3 ]
Liu, Peng [4 ]
Yue, Qiuling [1 ]
Hu, Jinglu [5 ]
Cao, Chunjie [1 ]
Zhang, Yuqing [1 ,2 ,3 ]
Affiliations
[1] Hainan Univ, Sch Cyberspace Secur, Haikou, Hainan, Peoples R China
[2] Xidian Univ, Coll Cyber Engn, Xian, Peoples R China
[3] Univ Chinese Acad Sci, Natl Comp Network Intrus Protect Ctr, Beijing, Peoples R China
[4] Penn State Univ, Coll Informat Sci & Technol, State Coll, PA USA
[5] Waseda Univ, Grad Sch Informat Prod & Syst, Tokyo, Japan
Funding
National Natural Science Foundation of China;
Keywords
vulnerability detection; abstract syntax tree; hierarchical attention network; deep learning;
DOI
10.1109/INFOCOMWKSHPS54753.2022.9798297
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
With the rapid development of software technology, the number of vulnerabilities is proliferating, which makes vulnerability detection an important topic in security research. Existing works focus only on predicting whether a given program is vulnerable and offer little interpretability. To overcome these deficits, we first apply a hierarchical attention network to vulnerability detection for interpretable and fine-grained vulnerability discovery. Specifically, our model contains two attention layers, at the line level and the token level of the code, to locate which lines or tokens are important for discovering vulnerabilities. Furthermore, to accurately extract features from source code, we process the code based on its abstract syntax tree and embed the syntax tokens into vectors. We evaluate the performance of our model on two widely used benchmark datasets from SARD, CWE-119 (Buffer Error) and CWE-399 (Resource Management Error). Experiments show that our model achieves F1 scores of 86.1% (CWE-119) and 90.0% (CWE-399), significantly better than state-of-the-art models. In particular, our model can directly mark the importance of different lines and tokens, which provides useful information for further vulnerability exploitation and repair.
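A minimal sketch of the two-level attention architecture described in the abstract, assuming a PyTorch implementation with bidirectional GRU encoders and additive attention at both the token and line levels; the paper's exact layer types, dimensions, and training setup are not stated here, so every name and hyperparameter below is an illustrative assumption rather than the authors' implementation.

```python
# Sketch only: token-level attention within each code line, then line-level
# attention over line vectors, as described in the abstract. Layer choices
# (GRU, additive attention, sizes) are assumptions, not the paper's spec.
import torch
import torch.nn as nn


class Attention(nn.Module):
    """Additive attention: scores each step and returns a weighted sum plus weights."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.context = nn.Linear(dim, 1, bias=False)

    def forward(self, h):                                   # h: (batch, steps, dim)
        scores = self.context(torch.tanh(self.proj(h)))     # (batch, steps, 1)
        weights = torch.softmax(scores, dim=1)
        return (weights * h).sum(dim=1), weights.squeeze(-1)


class HierarchicalAttentionNet(nn.Module):
    """Two-level attention: tokens -> line vectors -> function vector -> classifier.
    The returned weights indicate which lines/tokens the model attended to."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.token_gru = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                                bidirectional=True)
        self.token_attn = Attention(2 * hidden_dim)
        self.line_gru = nn.GRU(2 * hidden_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.line_attn = Attention(2 * hidden_dim)
        self.classifier = nn.Linear(2 * hidden_dim, 2)      # vulnerable / benign

    def forward(self, x):                                   # x: (batch, lines, tokens)
        b, n_lines, n_tokens = x.shape
        tok = self.embed(x.view(b * n_lines, n_tokens))
        tok, _ = self.token_gru(tok)
        line_vecs, tok_w = self.token_attn(tok)             # one vector per line
        line_seq, _ = self.line_gru(line_vecs.view(b, n_lines, -1))
        func_vec, line_w = self.line_attn(line_seq)         # one vector per function
        logits = self.classifier(func_vec)
        return logits, line_w, tok_w.view(b, n_lines, n_tokens)


# Usage: a batch of 4 functions, each padded to 30 lines of 20 AST token ids.
model = HierarchicalAttentionNet(vocab_size=5000)
ids = torch.randint(1, 5000, (4, 30, 20))
logits, line_weights, token_weights = model(ids)
print(logits.shape, line_weights.shape, token_weights.shape)
```

The two weight tensors carry the interpretability signal the abstract refers to: higher line or token weights mark the code the model relied on for its prediction.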
Pages: 6