NtNDet: Hardware Trojan detection based on pre-trained language models

Cited by: 0
Authors
Kuang, Shijie [1 ]
Quan, Zhe [1 ]
Xie, Guoqi [1 ]
Cai, Xiaomin [2 ,3 ]
Li, Keqin [4 ]
Affiliations
[1] Hunan Univ, Coll Comp Sci & Elect Engn, Changsha 410082, Peoples R China
[2] Hunan Univ Finance & Econ, Sch Comp Sci & Technol, Changsha, Peoples R China
[3] Acad Mil Sci, Beijing, Peoples R China
[4] SUNY Coll New Paltz, Dept Comp Sci, New Paltz, NY 12561 USA
Keywords
Gate-level netlists; Hardware Trojan detection; Large language model; Netlist-to-natural-language; Transfer learning;
DOI
10.1016/j.eswa.2025.126666
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Hardware Trojans (HTs) are malicious modifications embedded in Integrated Circuits (ICs) that pose a significant threat to security. The concealment of HTs and the complexity of IC manufacturing make them difficult to detect. An effective solution is to identify HTs at the gate level with machine learning techniques. However, current methods rely primarily on end-to-end training, which fails to exploit the advantages of large-scale pre-trained models and transfer learning, and does not draw on the extensive background knowledge contained in massive datasets. This study proposes NtNDet, an HT detection approach based on large-scale pre-trained NLP models. NtNDet includes a method called Netlist-to-Natural-Language (NtN) that converts gate-level netlists into a natural-language format suitable for Natural Language Processing (NLP) models, and it applies the self-attention mechanism of the Transformer to model complex dependencies within the netlist. To our knowledge, this is the first application of large-scale pre-trained models to gate-level netlist HT detection, promoting the use of pre-trained models in the security field. Experiments on the Trust-Hub, TRIT-TC, and TRIT-TS benchmarks demonstrate that our approach outperforms existing HT detection methods: precision increases by at least 5.27%, the True Positive Rate (TPR) by 3.06%, the True Negative Rate (TNR) by 0.01%, and the F1 score by about 3.17%, setting a new state of the art in HT detection.
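The abstract describes the NtN pipeline only at a high level: serialize the gate-level netlist into text, then let a pre-trained Transformer's self-attention model the dependencies and classify the result. Below is a minimal sketch of that idea, assuming a simple phrase-per-gate serialization, the bert-base-uncased checkpoint, and the Hugging Face transformers API; the paper's actual NtN encoding, model choice, and fine-tuning procedure are not given in the abstract, so those details are illustrative assumptions only.

```python
# Sketch of the NtN idea: serialize a gate-level netlist into a text sequence
# and score it with a pre-trained Transformer classifier.
# The serialization phrasing and model below are assumptions, not the paper's
# actual NtN encoding.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def netlist_to_sentence(gates):
    """Turn gate instances into a pseudo-natural-language description.

    `gates` is a list of (gate_type, output_net, input_nets) tuples,
    e.g. ("AND2", "n5", ["n1", "n2"]). The phrasing is illustrative only.
    """
    phrases = []
    for gate_type, out_net, in_nets in gates:
        phrases.append(f"{gate_type} gate drives {out_net} from {' and '.join(in_nets)}")
    return " ; ".join(phrases)

# Assumed checkpoint: in practice a pre-trained encoder would be fine-tuned
# on labeled Trojan / Trojan-free netlist segments before use.
MODEL_NAME = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

# A tiny hypothetical netlist fragment.
example_gates = [
    ("AND2", "n5", ["n1", "n2"]),
    ("XOR2", "n6", ["n5", "n3"]),
    ("DFF",  "q1", ["n6", "clk"]),
]
text = netlist_to_sentence(example_gates)
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits

# Class 1 is treated here as "Trojan-infected"; without fine-tuning the
# probability below is meaningless and only demonstrates the pipeline.
prob_trojan = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"Input text: {text}")
print(f"P(Trojan) = {prob_trojan:.3f}")
```

Framing detection as binary sequence classification over the serialized netlist is what lets transfer learning from large text corpora apply; in practice the classification head would be fine-tuned on labeled Trojan and Trojan-free netlist samples (e.g., from Trust-Hub) before its scores carry any meaning.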
Pages: 13