Research on Black-box Attack Algorithm by Targeting ID Card Text Recognition

Cited: 0
Authors
Xu C.-K. [1 ,2 ]
Feng W.-D. [1 ,2 ]
Zhang C.-J. [1 ,2 ]
Zheng X.-L. [3 ,4 ,5 ]
Zhang H. [6 ]
Wang F.-Y. [3 ,4 ,5 ]
Affiliations
[1] The Institute of Information Science, School of Computer and Information Technology, Beijing Jiaotong University, Beijing
[2] Beijing Key Laboratory of Advanced Information Science and Network Technology, Beijing
[3] State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing
[4] State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing
[5] School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing
[6] School of Transportation Science and Engineering, Beihang University, Beijing
Source
Funding
National Natural Science Foundation of China
Keywords
Adversarial examples; binarization mask; black-box attack; ID card text recognition; physical world
DOI
10.16383/j.aas.c230344
CLC Number
X9 [Safety Science]
Subject Classification Number
0837
Abstract
Identity card authentication scenarios often use text recognition models to extract, recognize, and authenticate information from ID card images, which poses a significant privacy-breach risk. Moreover, most current adversarial attack algorithms for text recognition models consider only simple backgrounds (such as printed documents) and white-box conditions, so they struggle to achieve satisfactory attack effects in the physical world and are unsuited to complex backgrounds, complex data, and black-box conditions. To alleviate these problems, this paper proposes a black-box attack algorithm for ID card text recognition models that accounts for more complex image backgrounds, stricter black-box conditions, and attack effects in the physical world. Building on a transfer-based black-box attack, the proposed algorithm introduces a binarization mask and spatial transformations, which improve the visual quality of adversarial examples and their robustness in the physical world while maintaining the attack success rate. By exploring the performance upper bound of the transfer-based black-box attack under different norm constraints and the influence of its key hyper-parameters, the proposed algorithm achieves a 100% attack success rate against the Baidu ID card recognition model. The ID card dataset will be made publicly available in the future. © 2024 Science Press. All rights reserved.
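The core idea the abstract describes — confining an adversarial perturbation to text strokes via a binarization mask so the background stays visually clean — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold-based mask, the FGSM-style sign-gradient update, and all function names are assumptions introduced here for clarity.

```python
import numpy as np

def binarization_mask(gray, threshold=0.5):
    """Assumed mask: 1 where a pixel is darker than the threshold
    (treated as a text stroke), 0 on the lighter background."""
    return (gray < threshold).astype(np.float32)

def masked_attack_step(image, grad, mask, epsilon=8 / 255):
    """One sign-gradient step (FGSM-style stand-in for a transfer-based
    update) whose perturbation is confined to the mask and bounded by an
    L-infinity ball of radius epsilon."""
    perturbation = np.clip(epsilon * np.sign(grad), -epsilon, epsilon) * mask
    return np.clip(image + perturbation, 0.0, 1.0)

# Toy example: a 4x4 grayscale patch with one dark "stroke" column.
img = np.full((4, 4), 0.9, dtype=np.float32)
img[:, 1] = 0.2                    # simulated text stroke
mask = binarization_mask(img)      # 1 only on the stroke column
grad = np.ones_like(img)           # stand-in for a surrogate model's gradient
adv = masked_attack_step(img, grad, mask)
```

In a real transfer-based attack, `grad` would come from a white-box surrogate recognizer, and the resulting `adv` would be submitted to the black-box target; the mask guarantees the background of the ID card image is never modified.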
Pages: 103-120
Number of pages: 17