Object-attentional untargeted adversarial attack

Cited by: 1
Authors
Zhou, Chao [1,2]
Wang, Yuan-Gen [1]
Zhu, Guopu [3 ]
Affiliations
[1] Guangzhou Univ, Sch Comp Sci & Cyber Engn, Guangzhou 510006, Peoples R China
[2] Guangzhou City Univ Technol, Sch Robot Engn, Guangzhou 510800, Peoples R China
[3] Harbin Inst Technol, Sch Cyberspace Secur, Harbin 150001, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Adversarial attack; Object detection; Salient object detection; Object region; Activation factor
DOI
10.1016/j.jisa.2024.103710
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Deep neural networks are facing severe threats from adversarial attacks. Most existing black-box attacks fool the target model by generating either global perturbations or local patches. However, both global perturbations and local patches easily cause annoying visual artifacts in the adversarial example. Compared with the smooth regions of an image, the object region generally has more edges and a more complex texture, so small perturbations on it are more imperceptible. On the other hand, the object region is undoubtedly the decisive part of an image for classification tasks. Motivated by these two facts, we propose an object-attentional adversarial attack method for untargeted attacks. Specifically, we first generate an object region by intersecting the object detection region from YOLOv4 with the salient object detection (SOD) region from HVPNet. Furthermore, we design an activation strategy to avoid the reaction caused by the incomplete SOD. Then, we perform an adversarial attack only on the detected object region by leveraging the Simple Black-box Adversarial Attack (SimBA). To verify the proposed method, we create a dedicated dataset, named COCO-Reduced-ImageNet in this paper, by extracting from ImageNet-1K all images containing the object categories defined by COCO. Experimental results on ImageNet-1K and COCO-Reduced-ImageNet show that, under various system settings, our method yields adversarial examples with better perceptual quality while saving up to 24.16% of the query budget compared to state-of-the-art approaches including SimBA. The code is available at https://github.com/GZHU-DVL/OA.
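For illustration, the following is a minimal sketch of the core idea described in the abstract: a SimBA-style random-coordinate attack that is only allowed to perturb pixels inside a precomputed binary object mask. It assumes a PyTorch image classifier; the function name masked_simba and the arguments model and mask are hypothetical, and the mask-generation step (intersecting YOLOv4 detections with the HVPNet saliency map and applying the activation factor) is not reproduced here. See the authors' official code at https://github.com/GZHU-DVL/OA for the actual method.

    import torch

    def masked_simba(model, x, label, mask, eps=0.2, max_queries=10000):
        # Sketch only: perturb one randomly chosen coordinate inside the object
        # mask per query, keeping a step whenever the true-class probability drops.
        # model: callable mapping a (1, C, H, W) image to logits of shape (1, num_classes)
        # x:     clean image tensor of shape (1, C, H, W), values in [0, 1]
        # label: ground-truth class index (int)
        # mask:  binary tensor of shape (1, 1, H, W); 1 marks the object region
        def predict(img):
            with torch.no_grad():
                return torch.softmax(model(img), dim=1)[0]

        x_adv = x.clone()
        coords = (mask.expand_as(x) > 0).nonzero(as_tuple=False)  # (K, 4) pixel indices inside the object region
        order = coords[torch.randperm(coords.size(0))]            # visit coordinates in random order

        probs = predict(x_adv)
        queries = 1
        for i in range(order.size(0)):
            if queries >= max_queries or probs.argmax().item() != label:
                break                                             # budget spent or attack already succeeded
            idx = tuple(order[i].tolist())
            for sign in (eps, -eps):                              # try +eps first, then -eps
                cand = x_adv.clone()
                cand[idx] = (cand[idx] + sign).clamp(0.0, 1.0)
                cand_probs = predict(cand)
                queries += 1
                if cand_probs[label] < probs[label]:              # keep the step if confidence on the true class drops
                    x_adv, probs = cand, cand_probs
                    break
        return x_adv, queries

Restricting the candidate coordinates to the object region keeps the perturbation on textured, decision-relevant pixels, which is how the abstract motivates both the improved perceptual quality and the reduced query budget relative to perturbing the whole image.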
Pages: 10