Research on Black-box Attack Algorithm by Targeting ID Card Text Recognition

Cited by: 0
Authors
Xu C.-K. [1 ,2 ]
Feng W.-D. [1 ,2 ]
Zhang C.-J. [1 ,2 ]
Zheng X.-L. [3 ,4 ,5 ]
Zhang H. [6 ]
Wang F.-Y. [3 ,4 ,5 ]
Affiliations
[1] The Institute of Information Science, School of Computer and Information Technology, Beijing Jiaotong University, Beijing
[2] Beijing Key Laboratory of Advanced Information Science and Network Technology, Beijing
[3] State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing
[4] State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing
[5] School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing
[6] School of Transportation Science and Engineering, Beihang University, Beijing
Source
Funding
National Natural Science Foundation of China;
Keywords
Adversarial examples; binarization mask; black-box attack; ID card text recognition; physical world;
DOI
10.16383/j.aas.c230344
CLC number
X9 [Safety Science];
Discipline code
0837;
Abstract
Identity card authentication scenarios often use text recognition models to extract, recognize, and authenticate information from ID card images, which poses a significant privacy breach risk. Moreover, most current adversarial attack algorithms for text recognition models consider only simple backgrounds (such as printed documents) and white-box conditions; they struggle to achieve the desired attack effect in the physical world and are not suitable for complex backgrounds, complex data, or black-box conditions. To alleviate these problems, this paper proposes a black-box attack algorithm for ID card text recognition models that accounts for more complex image backgrounds, stricter black-box conditions, and attack effectiveness in the physical world. Building on a transfer-based black-box attack algorithm, the proposed method introduces a binarization mask and spatial transformations, which improve the visual quality of adversarial examples and their robustness in the physical world while maintaining the attack success rate. By exploring the performance upper bound of the transfer-based black-box attack under different norm constraints and the influence of its key hyper-parameters, the proposed algorithm achieves a 100% attack success rate against the Baidu ID card recognition model. The ID card dataset will be made publicly available in the future. © 2024 Science Press. All rights reserved.
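The binarization-masked, norm-bounded update described in the abstract can be sketched roughly as follows. This is an illustrative iterative-FGSM-style variant only, not the authors' exact algorithm: the threshold, step size, number of steps, and the placeholder gradient callback are all assumptions.

```python
import numpy as np

def binarization_mask(img, threshold=0.5):
    """Binary mask restricting perturbations to dark (text-stroke) pixels.

    Pixels below the threshold (assumed text strokes on a light background)
    get mask value 1; background pixels get 0.
    """
    return (img < threshold).astype(np.float32)

def masked_ifgsm(img, grad_fn, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterative sign-gradient attack under an L_inf budget `eps`.

    `grad_fn(adv)` stands in for the gradient of a surrogate model's loss
    (a transfer-based attack queries a local surrogate, not the target).
    Updates are multiplied by the binarization mask, so only text-stroke
    pixels are ever perturbed, which improves the visual quality of the
    adversarial example.
    """
    mask = binarization_mask(img)
    adv = img.copy()
    for _ in range(steps):
        g = grad_fn(adv)
        adv = adv + alpha * np.sign(g) * mask      # perturb masked pixels only
        adv = np.clip(adv, img - eps, img + eps)   # project onto the eps-ball
        adv = np.clip(adv, 0.0, 1.0)               # keep a valid image
    return adv
```

With a constant positive gradient, background pixels (mask 0) stay untouched while stroke pixels saturate at the `eps` bound, matching the L_inf constraint discussed in the abstract.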
Pages: 103-120
Page count: 17
Related papers
44 references in total
  • [1] Krizhevsky A, Sutskever I, Hinton G E., ImageNet classification with deep convolutional neural networks, Proceedings of the 25th International Conference on Neural Information Processing Systems, pp. 1097-1105, (2012)
  • [2] Liu Z, Mao H Z, Wu C Y, Feichtenhofer C, Darrell T, Xie S N., A ConvNet for the 2020s, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11966-11976, (2022)
  • [3] Bahdanau D, Chorowski J, Serdyuk D, Brakel P, Bengio Y., End-to-end attention-based large vocabulary speech recognition, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4945-4949, (2016)
  • [4] Afkanpour A, Adeel S, Bassani H, Epshteyn A, Fan H B, Jones I, et al., BERT for long documents: A case study of automated ICD coding, Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI), pp. 100-107, (2022)
  • [5] Ouyang L, Wu J, Jiang X, Almeida D, Wainwright C L, Mishkin P, et al., Training language models to follow instructions with human feedback, Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS), (2022)
  • [6] Silver D, Huang A, Maddison C J, Guez A, Sifre L, van den Driessche G, et al., Mastering the game of Go with deep neural networks and tree search, Nature, 529, 7587, pp. 484-489, (2016)
  • [7] Jumper J, Evans R, Pritzel A, Green T, Figurnov M, Ronneberger O, et al., Highly accurate protein structure prediction with AlphaFold, Nature, 596, 7873, pp. 583-589, (2021)
  • [8] Sallam M., ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns, Healthcare, 11, 6, (2023)
  • [9] Wang J K, Yin Z X, Hu P F, Liu A S, Tao R S, Qin H T, et al., Defensive patches for robust recognition in the physical world, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2446-2455, (2022)
  • [10] Yuan X Y, He P, Zhu Q L, Li X L., Adversarial examples: Attacks and defenses for deep learning, IEEE Transactions on Neural Networks and Learning Systems, 30, 9, pp. 2805-2824, (2019)