Universal Adversarial Patch Attack for Automatic Checkout Using Perceptual and Attentional Bias

Cited: 0
Authors
Wang, Jiakai [1 ]
Liu, Aishan [1 ]
Bai, Xiao [1 ]
Liu, Xianglong [1 ,2 ]
Affiliations
[1] State Key Laboratory of Software Development Environment, Beihang University, Beijing 100191, China
[2] Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing 100191, China
Keywords
Textures; Uncertainty analysis
DOI: not available
Abstract
Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs). Recently, the adversarial patch, with noise confined to a small, localized region, has emerged because it is easy to deploy in real-world scenarios. However, existing strategies fail to generate adversarial patches with strong generalization ability because they ignore the inherent biases of models. In other words, the patches are input-specific and fail to attack images from all classes or different models, especially unseen classes and black-box models. To address this problem, this paper proposes a bias-based framework that generates universal adversarial patches with strong generalization ability by exploiting perceptual bias and attentional bias. Regarding the perceptual bias, since DNNs are strongly biased towards textures, we exploit hard examples, which convey strong model uncertainty, and extract a textural patch prior from them using style similarities. This prior lies closer to decision boundaries and thus promotes attacks across classes. As for the attentional bias, motivated by the fact that different models share similar attention patterns towards the same image, we exploit this bias by confusing the model-shared attention patterns, so the generated patches transfer better across models. Taking Automatic Check-out (ACO) as the typical scenario, extensive experiments are conducted in white-box and black-box settings in both the digital world (RPC, the largest ACO-related dataset) and the physical world (Taobao and JD, the world's largest online shopping platforms). Experimental results demonstrate that the proposed framework outperforms state-of-the-art adversarial patch attack methods. © 1992-2012 IEEE.
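The abstract's two bias terms can be sketched loosely in NumPy: a Gram-matrix style loss (the standard texture-similarity measure from neural style transfer) that pulls the patch's features toward the extracted textural prior, and an attention term that pushes the model's attention map away from its clean-image pattern. All function names and the combined objective below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gram_matrix(features):
    """Channel-correlation (Gram) matrix of a (C, H, W) feature map.

    Gram matrices discard spatial layout and keep texture statistics,
    which is why they serve as a style/texture similarity measure.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_distance(feat_patch, feat_prior):
    """Squared Frobenius distance between Gram matrices: small when the
    patch shares the texture statistics of the textural prior."""
    diff = gram_matrix(feat_patch) - gram_matrix(feat_prior)
    return float(np.sum(diff ** 2))

def attention_shift(attn_with_patch, attn_clean):
    """How far the model's (normalized) attention map moves once the
    patch is applied; larger means the model-shared attention pattern
    is more disrupted."""
    a = attn_with_patch / (attn_with_patch.sum() + 1e-8)
    b = attn_clean / (attn_clean.sum() + 1e-8)
    return float(np.sum((a - b) ** 2))

def patch_objective(feat_patch, feat_prior, attn_with_patch, attn_clean, lam=1.0):
    """Illustrative combined objective to *minimize*: stay close to the
    textural prior while maximizing the attention disruption."""
    return style_distance(feat_patch, feat_prior) \
        - lam * attention_shift(attn_with_patch, attn_clean)
```

In the actual attack the feature maps and attention maps would come from a surrogate DNN, and the patch pixels would be updated by gradient descent on such an objective; here the maps are plain arrays so the two loss terms can be inspected in isolation.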
Pages: 598-611
Related Papers (15 in total)
  • [1] Universal Adversarial Patch Attack for Automatic Checkout Using Perceptual and Attentional Bias
    Wang, Jiakai
    Liu, Aishan
    Bai, Xiao
    Liu, Xianglong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 598 - 611
  • [2] Defending Person Detection Against Adversarial Patch Attack by Using Universal Defensive Frame
    Yu, Youngjoon
    Lee, Hong Joo
    Lee, Hakmin
    Ro, Yong Man
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 6976 - 6990
  • [3] Hard-label Black-box Universal Adversarial Patch Attack
    Tao, Guanhong
    An, Shengwei
    Cheng, Siyuan
    Shen, Guangyu
    Zhang, Xiangyu
    PROCEEDINGS OF THE 32ND USENIX SECURITY SYMPOSIUM, 2023, : 697 - 714
  • [4] Enabling Fast and Universal Audio Adversarial Attack Using Generative Model
    Xie, Yi
    Li, Zhuohang
    Shi, Cong
    Liu, Jian
    Chen, Yingying
    Yuan, Bo
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 14129 - 14137
  • [5] ATTACK ON PRACTICAL SPEAKER VERIFICATION SYSTEM USING UNIVERSAL ADVERSARIAL PERTURBATIONS
    Zhang, Weiyi
    Zhao, Shuning
    Liu, Le
    Li, Jianmin
    Cheng, Xingliang
    Zheng, Thomas Fang
    Hu, Xiaolin
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 2575 - 2579
  • [6] Universal Adversarial Training Using Auxiliary Conditional Generative Model-Based Adversarial Attack Generation
    Dingeto, Hiskias
    Kim, Juntae
    APPLIED SCIENCES-BASEL, 2023, 13 (15)
  • [7] Template-based universal adversarial attack for synthetic aperture radar automatic target recognition network
    Liu, Wei
    Wan, Xuanshen
    Niu, Chaoyang
    Lu, Wanjie
    Li, Yuanli
    IET RADAR SONAR AND NAVIGATION, 2025, 19 (01)
  • [8] Attacks on state-of-the-art face recognition using attentional adversarial attack generative network
    Yang, Lu
    Song, Qing
    Wu, Yingqi
    MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (01) : 855 - 875
  • [9] M-SAN: a patch-based transferable adversarial attack using the multi-stack adversarial network
    Agrawal, Khushabu
    Bhatnagar, Charul
    JOURNAL OF ELECTRONIC IMAGING, 2023, 32 (02)