FPGA Adaptive Neural Network Quantization for Adversarial Image Attack Defense

Cited by: 0
Authors
Lu, Yufeng [1 ,2 ]
Shi, Xiaokang [2 ,3 ]
Jiang, Jianan [1 ,4 ]
Deng, Hanhui [1 ,4 ]
Wang, Yanwen [2 ,3 ]
Lu, Jiwu [1 ,2 ]
Wu, Di [1 ,4 ]
Affiliations
[1] Hunan Univ, Natl Engn Res Ctr Robot Visual Percept & Control T, Changsha 410082, Hunan, Peoples R China
[2] Hunan Univ, Coll Elect & Informat Engn, Changsha 410082, Hunan, Peoples R China
[3] Hunan Univ, Shenzhen Res Inst, Shenzhen 518000, Peoples R China
[4] Hunan Univ, Sch Robot, Changsha 410082, Hunan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Field programmable gate arrays; Quantization (signal); Computational modeling; Training; Robustness; Neural networks; Real-time systems; Adversarial attack; field-programmable gate array (FPGA); quantized neural networks (QNNs);
DOI
10.1109/TII.2024.3438284
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline code
0812 ;
Abstract
Quantized neural networks (QNNs) have become a standard approach for efficiently deploying deep learning models on hardware platforms in real-world application scenarios. An empirical study on the German Traffic Sign Recognition Benchmark (GTSRB) dataset shows that under three white-box adversarial attacks, namely the fast gradient sign method (FGSM), random + fast gradient sign method (R+FGSM), and basic iterative method (BIM), the accuracy of the fully quantized model was only 55%, much lower than that of the full-precision model (73%). This indicates that the adversarial robustness of the fully quantized model is much worse than that of the full-precision model. To improve the adversarial robustness of the fully quantized model, we designed an adversarial attack defense platform based on a field-programmable gate array (FPGA) to jointly optimize the efficiency and robustness of QNNs. Hardware-friendly techniques such as adversarial training and feature squeezing were studied and transferred to the FPGA platform on top of the designed QNN accelerator. Experiments on the GTSRB dataset show that adversarial training embedded on the FPGA increases the model's average accuracy by 2.5% on clean data, 15% under white-box attacks, and 4% under black-box attacks, demonstrating that our methodology improves the robustness of the fully quantized model under different adversarial attacks.
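The white-box attacks evaluated in the abstract are all gradient-sign methods. As a minimal illustration only (a toy logistic-regression "classifier", not the paper's QNN or its FPGA implementation), one FGSM step perturbs the input in the direction of the sign of the loss gradient, clipped to an L-infinity budget eps:

```python
import numpy as np

def fgsm_attack(x, y, w, b, eps):
    """One-step fast gradient sign method (FGSM) against a logistic-
    regression classifier. x: input vector; y: true label in {0, 1};
    (w, b): model parameters; eps: L-infinity perturbation budget."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # model's predicted probability
    grad_x = (p - y) * w                    # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad_x)        # step that increases the loss

# Toy example: a 4-pixel "image" attacked with budget eps = 0.1
rng = np.random.default_rng(0)
x = rng.random(4)
w, b = rng.standard_normal(4), 0.0
x_adv = fgsm_attack(x, y=1, w=w, b=b, eps=0.1)
```

R+FGSM prepends a small random step before the gradient step, and BIM repeats the FGSM step several times with a smaller step size, re-clipping to the budget after each iteration.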
Pages: 14017-14028
Page count: 12
Related Papers
50 records in total
  • [31] Adversarial Attack and Defense in Deep Ranking
    Zhou, Mo
    Wang, Le
    Niu, Zhenxing
    Zhang, Qilin
    Zheng, Nanning
    Hua, Gang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (08) : 5306 - 5324
  • [32] A DoS attack detection method based on adversarial neural network
    Li, Yang
    Wu, Haiyan
    PEERJ COMPUTER SCIENCE, 2024, 10
  • [33] ImgQuant: Towards Adversarial Defense with Robust Boundary via Dual-Image Quantization
    Lv, Huanhuan
    Jiang, Songru
    Wan, Tuohang
    Chen, Lijun
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT IV, 2025, 15034 : 17 - 31
  • [34] Chaotic neural network quantization and its robustness against adversarial attacks
    Osama, Alaa
    Gadallah, Samar I.
    Said, Lobna A.
    Radwan, Ahmed G.
    Fouda, Mohammed E.
    KNOWLEDGE-BASED SYSTEMS, 2024, 286
  • [35] ADAPTIVE LAYERWISE QUANTIZATION FOR DEEP NEURAL NETWORK COMPRESSION
    Zhu, Xiaotian
    Zhou, Wengang
    Li, Houqiang
    2018 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2018,
  • [36] Adaptive Perturbation for Adversarial Attack
    Yuan, Zheng
    Zhang, Jie
    Jiang, Zhaoyan
    Li, Liangliang
    Shan, Shiguang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (08) : 5663 - 5676
  • [37] Fully Nested Neural Network for Adaptive Compression and Quantization
    Cui, Yufei
    Liu, Ziquan
    Yao, Wuguannan
    Li, Qiao
    Chan, Antoni B.
    Kuo, Tei-wei
    Xue, Chun Jason
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 2080 - 2087
  • [38] Adaptive Neural Network Quantization for Lightweight Speaker Verification
    Wang, Haoyu
    Liu, Bei
    Wu, Yifei
    Qian, Yanmin
    INTERSPEECH 2023, 2023, : 5331 - 5335
  • [39] Toward Robust Neural Image Compression: Adversarial Attack and Model Finetuning
    Chen, Tong
    Ma, Zhan
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (12) : 7842 - 7856
  • [40] Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantization
    Yang, Yulong
    Lin, Chenhao
    Li, Qian
    Zhao, Zhengyu
    Fan, Haoran
    Zhou, Dawei
    Wang, Nannan
    Liu, Tongliang
    Shen, Chao
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 3265 - 3278