Enhancing smart contract security: Leveraging pre-trained language models for advanced vulnerability detection

Cited by: 0
Authors
He F. [1 ]
Li F. [1 ]
Liang P. [1 ]
Affiliations
[1] College of Blockchain Industry, Chengdu University of Information Technology, Chengdu, Sichuan
Source
IET Blockchain | 2024 / Vol. 4 / Issue S1
Keywords
artificial intelligence; blockchain applications and digital technology; blockchains; contracts; decentralized applications;
DOI
10.1049/blc2.12072
Abstract
The burgeoning interest in decentralized applications (Dapps), spurred by advancements in blockchain technology, underscores the critical role of smart contracts. However, many Dapp users, often without deep knowledge of smart contracts, face financial risks due to hidden vulnerabilities. Traditional methods for detecting these vulnerabilities, including manual inspections and automated static analysis, are plagued by issues such as high rates of false positives and overlooked security flaws. To combat this, the article introduces an innovative approach using the bidirectional encoder representations from transformers (BERT)-ATT-BiLSTM model for identifying potential weaknesses in smart contracts. This method leverages the BERT pre-trained model to discern semantic features from contract opcodes, which are then refined using a Bidirectional Long Short-Term Memory Network (BiLSTM) and augmented by an attention mechanism that prioritizes critical features. The goal is to improve the model's generalization ability and enhance detection accuracy. Experiments on various publicly available smart contract datasets confirm the model's superior performance, outperforming previous methods in key metrics like accuracy, F1-score, and recall. This research not only offers a powerful tool to bolster smart contract security, mitigating financial risks for average users, but also serves as a valuable reference for advancements in natural language processing and deep learning. © 2024 The Authors. IET Blockchain published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
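The abstract describes a pipeline in which opcode embeddings are refined by a BiLSTM and pooled by an attention mechanism before classification. A minimal NumPy sketch of that pipeline is shown below; the BERT encoder is replaced by random stand-in embeddings and the LSTM cells by plain tanh recurrent cells purely for illustration, and all dimensions and variable names are assumptions, not the authors' implementation.

```python
# Illustrative sketch of a BERT-ATT-BiLSTM-style pipeline (NumPy only).
# Assumptions: random vectors stand in for BERT opcode embeddings, and a
# simple tanh RNN stands in for the LSTM cells; this is not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def rnn_pass(x, W, U, b):
    """Simple tanh recurrent pass over a sequence (T, d) -> (T, h)."""
    h = np.zeros(W.shape[1])
    out = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ W + h @ U + b)
        out.append(h)
    return np.stack(out)

T, d, h = 12, 16, 8               # opcode sequence length, embed dim, hidden dim
emb = rng.normal(size=(T, d))     # stand-in for BERT opcode embeddings

# Bidirectional pass: forward and backward states concatenated per timestep
Wf, Uf, bf = rng.normal(size=(d, h)), rng.normal(size=(h, h)), np.zeros(h)
Wb, Ub, bb = rng.normal(size=(d, h)), rng.normal(size=(h, h)), np.zeros(h)
H = np.concatenate([rnn_pass(emb, Wf, Uf, bf),
                    rnn_pass(emb[::-1], Wb, Ub, bb)[::-1]], axis=1)  # (T, 2h)

# Attention: score each timestep, softmax, weighted sum -> context vector
w_att = rng.normal(size=2 * h)
alpha = softmax(H @ w_att)        # (T,) attention weights over opcodes
context = alpha @ H               # (2h,) features emphasizing critical opcodes

# Binary classifier head: probability the contract is vulnerable
w_out, b_out = rng.normal(size=2 * h), 0.0
p_vuln = 1.0 / (1.0 + np.exp(-(context @ w_out + b_out)))
print(H.shape, float(alpha.sum()), float(p_vuln))
```

The attention weights `alpha` are what lets the model prioritize critical opcode positions, as the abstract describes; in the paper's setting these components would be trained end-to-end rather than randomly initialized.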
Pages: 543 - 554
Related Papers
50 records in total
  • [1] Smart Contract Vulnerability Detection with Self-ensemble Pre-trained Language Models
    Dai, Chaofan
    Ding, Huahua
    Ma, Wubin
    Wu, Yahui
    2024 INTERNATIONAL CONFERENCE ON COMPUTER, INFORMATION AND TELECOMMUNICATION SYSTEMS, CITS 2024, 2024, : 118 - 125
  • [2] Leveraging Pre-trained Language Models for Gender Debiasing
    Jain, Nishtha
    Popovic, Maja
    Groves, Declan
    Specia, Lucia
    LREC 2022: THIRTEEN INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 2188 - 2195
  • [3] Leveraging pre-trained language models for code generation
    Soliman, Ahmed
    Shaheen, Samir
    Hadhoud, Mayada
    COMPLEX & INTELLIGENT SYSTEMS, 2024, 10 (03) : 3955 - 3980
  • [4] Advanced Smart Contract Vulnerability Detection using Large Language Models
    Erfan, Fatemeh
    Yahyatabar, Mohammad
    Bellaiche, Martine
    Halabi, Talal
    2024 8TH CYBER SECURITY IN NETWORKING CONFERENCE, CSNET, 2024, : 289 - 296
  • [5] Vulnerability Analysis of Continuous Prompts for Pre-trained Language Models
    Li, Zhicheng
    Shi, Yundi
    Sheng, Xuan
    Yin, Changchun
    Zhou, Lu
    Li, Piji
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT IX, 2023, 14262 : 508 - 519
  • [6] Interpreting Art by Leveraging Pre-Trained Models
    Penzel, Niklas
    Denzler, Joachim
    2023 18TH INTERNATIONAL CONFERENCE ON MACHINE VISION AND APPLICATIONS, MVA, 2023,
  • [7] Leveraging pre-trained language models for mining microbiome-disease relationships
    Karkera, Nikitha
    Acharya, Sathwik
    Palaniappan, Sucheendra K.
    BMC BIOINFORMATICS, 2023, 24 (01)
  • [8] CreativeBot: a Creative Storyteller Agent Developed by Leveraging Pre-trained Language Models
    Elgarf, Maha
    Peters, Christopher
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 13438 - 13444
  • [9] Enhancing Turkish Sentiment Analysis Using Pre-Trained Language Models
    Koksal, Omer
    29TH IEEE CONFERENCE ON SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS (SIU 2021), 2021,