Enhancing smart contract security: Leveraging pre-trained language models for advanced vulnerability detection

Cited by: 0
Authors
He F. [1 ]
Li F. [1 ]
Liang P. [1 ]
Affiliations
[1] College of Blockchain Industry, Chengdu University of Information Technology, Sichuan, Chengdu
Source
IET Blockchain | 2024, Vol. 4, Issue S1
Keywords
artificial intelligence; blockchain applications and digital technology; blockchains; contracts; decentralized applications;
DOI
10.1049/blc2.12072
Abstract
The burgeoning interest in decentralized applications (Dapps), spurred by advancements in blockchain technology, underscores the critical role of smart contracts. However, many Dapp users, often without deep knowledge of smart contracts, face financial risks due to hidden vulnerabilities. Traditional methods for detecting these vulnerabilities, including manual inspections and automated static analysis, are plagued by issues such as high rates of false positives and overlooked security flaws. To combat this, the article introduces an innovative approach using the bidirectional encoder representations from transformers (BERT)-ATT-BiLSTM model for identifying potential weaknesses in smart contracts. This method leverages the BERT pre-trained model to discern semantic features from contract opcodes, which are then refined using a Bidirectional Long Short-Term Memory Network (BiLSTM) and augmented by an attention mechanism that prioritizes critical features. The goal is to improve the model's generalization ability and enhance detection accuracy. Experiments on various publicly available smart contract datasets confirm the model's superior performance, outperforming previous methods in key metrics like accuracy, F1-score, and recall. This research not only offers a powerful tool to bolster smart contract security, mitigating financial risks for average users, but also serves as a valuable reference for advancements in natural language processing and deep learning. © 2024 The Authors. IET Blockchain published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
Pages: 543-554
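The abstract describes the BERT-ATT-BiLSTM pipeline only at a high level: a pre-trained BERT encoder extracts semantic features from contract opcode sequences, a BiLSTM refines them, and an attention layer weights the positions most relevant to a vulnerability before classification. The following is a minimal sketch of such an architecture, assuming a PyTorch/Hugging Face setup; the class name, hidden size, number of labels, choice of bert-base-uncased, and the additive attention form are illustrative assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch of a BERT -> BiLSTM -> attention -> classifier model,
# as outlined in the abstract. Hyperparameters and the pre-trained checkpoint
# are placeholders, not the paper's reported settings.
import torch
import torch.nn as nn
from transformers import BertModel

class BertAttBiLSTM(nn.Module):
    def __init__(self, num_labels=2, hidden_dim=256, bert_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)      # contextual features of opcode tokens
        self.bilstm = nn.LSTM(self.bert.config.hidden_size, hidden_dim,
                              batch_first=True, bidirectional=True)  # sequential refinement
        self.attn = nn.Linear(2 * hidden_dim, 1)               # per-token attention score
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)  # e.g. vulnerable vs. safe

    def forward(self, input_ids, attention_mask):
        # Token-level embeddings from the pre-trained encoder
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.bilstm(hidden)                       # (batch, seq_len, 2*hidden_dim)
        # Attention weights emphasise the positions most indicative of a vulnerability
        scores = self.attn(lstm_out).squeeze(-1)
        scores = scores.masked_fill(attention_mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        context = (weights * lstm_out).sum(dim=1)               # attention-weighted sequence summary
        return self.classifier(context)
```

In use, opcode sequences would be tokenized with the matching BERT tokenizer and the logits trained with a standard cross-entropy loss; the paper's specific preprocessing, datasets, and training regime are not reproduced here.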