Enhancing smart contract security: Leveraging pre-trained language models for advanced vulnerability detection

Cited by: 0
Authors
He F. [1]
Li F. [1]
Liang P. [1]
Affiliations
[1] College of Blockchain Industry, Chengdu University of Information Technology, Chengdu, Sichuan, China
Source
IET Blockchain | 2024 / Vol. 4 / Issue S1
Keywords
artificial intelligence; blockchain applications and digital technology; blockchains; contracts; decentralized applications
DOI
10.1049/blc2.12072
Abstract
The burgeoning interest in decentralized applications (Dapps), spurred by advancements in blockchain technology, underscores the critical role of smart contracts. However, many Dapp users, often without deep knowledge of smart contracts, face financial risks due to hidden vulnerabilities. Traditional methods for detecting these vulnerabilities, including manual inspections and automated static analysis, are plagued by issues such as high rates of false positives and overlooked security flaws. To combat this, the article introduces an innovative approach using the bidirectional encoder representations from transformers (BERT)-ATT-BiLSTM model for identifying potential weaknesses in smart contracts. This method leverages the BERT pre-trained model to discern semantic features from contract opcodes, which are then refined using a Bidirectional Long Short-Term Memory Network (BiLSTM) and augmented by an attention mechanism that prioritizes critical features. The goal is to improve the model's generalization ability and enhance detection accuracy. Experiments on various publicly available smart contract datasets confirm the model's superior performance, outperforming previous methods in key metrics like accuracy, F1-score, and recall. This research not only offers a powerful tool to bolster smart contract security, mitigating financial risks for average users, but also serves as a valuable reference for advancements in natural language processing and deep learning. © 2024 The Authors. IET Blockchain published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
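To make the pipeline described in the abstract concrete, the following is a minimal, hypothetical sketch of a BERT-ATT-BiLSTM classifier in PyTorch with the Hugging Face transformers library. The checkpoint name (`bert-base-uncased`), layer sizes, the additive-attention formulation, and the opcode-to-token preprocessing are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a BERT -> BiLSTM -> attention -> classifier pipeline
# for smart contract opcode sequences. Hyperparameters and the BERT checkpoint
# are assumptions for illustration only.
import torch
import torch.nn as nn
from transformers import BertModel

class BertAttBiLSTM(nn.Module):
    def __init__(self, bert_name="bert-base-uncased", lstm_hidden=128, num_classes=2):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)            # semantic features from opcode tokens
        self.bilstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                              batch_first=True, bidirectional=True)  # sequential refinement of BERT features
        self.attn = nn.Linear(2 * lstm_hidden, 1)                    # per-token attention score
        self.classifier = nn.Linear(2 * lstm_hidden, num_classes)    # vulnerable vs. safe

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        seq, _ = self.bilstm(hidden)                                 # (batch, seq_len, 2*lstm_hidden)
        scores = self.attn(seq).squeeze(-1)                          # (batch, seq_len)
        scores = scores.masked_fill(attention_mask == 0, -1e9)       # mask out padding positions
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        context = (weights * seq).sum(dim=1)                         # attention-weighted sequence summary
        return self.classifier(context)
```

In this sketch the attention layer re-weights the BiLSTM outputs so that opcode positions most indicative of a vulnerability dominate the pooled representation fed to the classifier, which mirrors the "attention mechanism that prioritizes critical features" described above.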
Pages: 543 - 554
Related Papers
50 records in total
  • [31] Probing for Hyperbole in Pre-Trained Language Models
    Schneidermann, Nina Skovgaard
    Hershcovich, Daniel
    Pedersen, Bolette Sandford
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-SRW 2023, VOL 4, 2023, : 200 - 211
  • [32] Pre-trained language models in medicine: A survey
    Luo, Xudong
    Deng, Zhiqi
    Yang, Binxia
    Luo, Michael Y.
    ARTIFICIAL INTELLIGENCE IN MEDICINE, 2024, 154
  • [33] DFEPT: Data Flow Embedding for Enhancing Pre-Trained Model Based Vulnerability Detection
    Jiang, Zhonghao
    Sun, Weifeng
    Gu, Xiaoyan
    Wu, Jiaxin
    Wen, Tao
    Hu, Haibo
    Yan, Meng
    PROCEEDINGS OF THE 15TH ASIA-PACIFIC SYMPOSIUM ON INTERNETWARE, INTERNETWARE 2024, 2024, : 95 - 104
  • [34] Jailbreaking Pre-trained Large Language Models Towards Hardware Vulnerability Insertion Ability
    Wan, Gwok-Waa
    Wong, Sam-Zaak
    Wang, Xi
    PROCEEDINGS OF THE GREAT LAKES SYMPOSIUM ON VLSI 2024, GLSVLSI 2024, 2024, : 579 - 582
  • [35] Porter 6: Protein Secondary Structure Prediction by Leveraging Pre-Trained Language Models (PLMs)
    Alanazi, Wafa
    Meng, Di
    Pollastri, Gianluca
    INTERNATIONAL JOURNAL OF MOLECULAR SCIENCES, 2025, 26 (01)
  • [36] Enhancing Machine-Generated Text Detection: Adversarial Fine-Tuning of Pre-Trained Language Models
    Hee Lee, Dong
    Jang, Beakcheol
    IEEE ACCESS, 2024, 12 : 65333 - 65340
  • [37] A Study of Pre-trained Language Models in Natural Language Processing
    Duan, Jiajia
    Zhao, Hui
    Zhou, Qian
    Qiu, Meikang
    Liu, Meiqin
    2020 IEEE INTERNATIONAL CONFERENCE ON SMART CLOUD (SMARTCLOUD 2020), 2020, : 116 - 121
  • [38] Enhancing Language Generation with Effective Checkpoints of Pre-trained Language Model
    Park, Jeonghyeok
    Zhao, Hai
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 2686 - 2694
  • [39] Few-Sample Named Entity Recognition for Security Vulnerability Reports by Fine-Tuning Pre-trained Language Models
    Yang, Guanqun
    Dineen, Shay
    Lin, Zhipeng
    Liu, Xueqing
    DEPLOYABLE MACHINE LEARNING FOR SECURITY DEFENSE, MLHAT 2021, 2021, 1482 : 55 - 78
  • [40] Discrimination Bias Detection Through Categorical Association in Pre-Trained Language Models
    Dusi, Michele
    Arici, Nicola
    Gerevini, Alfonso Emilio
    Putelli, Luca
    Serina, Ivan
    IEEE ACCESS, 2024, 12 : 162651 - 162667