FPGA Adaptive Neural Network Quantization for Adversarial Image Attack Defense

Citations: 0
Authors
Lu, Yufeng [1 ,2 ]
Shi, Xiaokang [2 ,3 ]
Jiang, Jianan [1 ,4 ]
Deng, Hanhui [1 ,4 ]
Wang, Yanwen [2 ,3 ]
Lu, Jiwu [1 ,2 ]
Wu, Di [1 ,4 ]
Affiliations
[1] Hunan Univ, Natl Engn Res Ctr Robot Visual Percept & Control Technol, Changsha 410082, Hunan, Peoples R China
[2] Hunan Univ, Coll Elect & Informat Engn, Changsha 410082, Hunan, Peoples R China
[3] Hunan Univ, Shenzhen Res Inst, Shenzhen 518000, Peoples R China
[4] Hunan Univ, Sch Robot, Changsha 410082, Hunan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Field programmable gate arrays; Quantization (signal); Computational modeling; Training; Robustness; Neural networks; Real-time systems; Adversarial attack; field-programmable gate array (FPGA); quantized neural networks (QNNs);
DOI
10.1109/TII.2024.3438284
CLC Number
TP [Automation and Computer Technology];
Discipline Code
0812;
Abstract
Quantized neural networks (QNNs) have become a standard technique for efficiently deploying deep learning models on hardware platforms in real-world application scenarios. An empirical study on the German Traffic Sign Recognition Benchmark (GTSRB) dataset shows that under three white-box adversarial attacks, the fast gradient sign method (FGSM), random + FGSM, and the basic iterative method, the accuracy of the fully quantized model was only 55%, far below that of the full-precision model (73%). This indicates that the adversarial robustness of the fully quantized model is much worse than that of the full-precision model. To improve the adversarial robustness of the fully quantized model, we designed an adversarial attack defense platform based on a field-programmable gate array (FPGA) to jointly optimize the efficiency and robustness of QNNs. Hardware-friendly techniques such as adversarial training and feature squeezing were studied and transferred to the FPGA platform on top of the designed QNN accelerator. Experiments on the GTSRB dataset show that adversarial training embedded on the FPGA increases the model's average accuracy by 2.5% on clean data, 15% under white-box attacks, and 4% under black-box attacks, demonstrating that our methodology improves the robustness of the fully quantized model under different adversarial attacks.
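
The abstract names one attack family (FGSM and its variants) and two defenses (adversarial training and feature squeezing) but gives no implementation detail. Below is a minimal PyTorch sketch of these three pieces. It is not the authors' method: it runs in software rather than on the paper's FPGA accelerator, and the toy model, the step size epsilon = 8/255, and the 4-bit squeezing depth are illustrative assumptions, not values from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


def fgsm_attack(model, x, y, epsilon=8 / 255):
    # Fast gradient sign method: take one step of size epsilon along the
    # sign of the loss gradient w.r.t. the input, then clip to [0, 1].
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()


def feature_squeeze(x, bits=4):
    # Feature squeezing by bit-depth reduction: round pixels in [0, 1] to
    # 2**bits levels, discarding the low-amplitude detail that adversarial
    # noise lives in. Bit width is an illustrative assumption.
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels


def adversarial_training_step(model, optimizer, x, y, epsilon=8 / 255):
    # One adversarial-training step: craft FGSM examples on the fly and
    # train the model to classify them correctly.
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy stand-in classifier for 32x32 RGB inputs and the 43 GTSRB classes.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 43))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 43, (8,))
    adversarial_training_step(model, optimizer, x, y)          # train-time defense
    logits = model(feature_squeeze(fgsm_attack(model, x, y)))  # inference-time defense
    print(logits.argmax(dim=1))

One reason bit-depth reduction is attractive in this setting is that it is itself a quantization step, so it composes naturally with a quantized-network datapath such as the paper's FPGA accelerator.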
Pages: 14017-14028 (12 pages)