Improving Transferable Targeted Adversarial Attack for Object Detection Using RCEN Framework and Logit Loss Optimization

Cited by: 0
Authors
Ding, Zhiyi [1 ]
Sun, Lei [1 ]
Mao, Xiuqing [1 ]
Dai, Leyu [1 ]
Ding, Ruiyang [1 ]
Institutions
[1] Informat Engn Univ, Sch Cryptog Engn, Zhengzhou 450000, Peoples R China
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2024, Vol. 80, No. 3
Keywords
Object detection; model security; targeted attack; gradient diversity
DOI
10.32604/cmc.2024.052196
CLC Classification
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
Object detection is widely applied across sectors including autonomous driving, industry, and healthcare. Recent studies have highlighted the vulnerability of object detection models built on deep neural networks to carefully crafted adversarial examples, which not only exposes their weakness against malicious attacks but also raises broad concerns about the security of deployed systems. Most existing adversarial attack strategies target image classification and fail to exploit the distinctive characteristics of object detection models, so their transferability is generally poor. Moreover, prior research has concentrated on the transferability of non-targeted attacks, whereas improving the transferability of targeted adversarial examples is considerably harder. Traditional attack techniques typically use cross-entropy as the loss, iteratively adjusting adversarial examples toward the target category, but the inherent limitations of cross-entropy restrict their applicability and transferability across models. To address these challenges, this study proposes a novel targeted adversarial attack method that enhances the transferability of adversarial examples across object detection models. First, within an iterative attack framework, we devise a new objective function that mitigates consistency issues arising from accumulated noise and enlarges the separation between target and non-target categories (the logit margin). Second, a data augmentation framework combining random erasing and color transformations is introduced into targeted adversarial attacks, increasing gradient diversity and preventing overfitting to the white-box model. Last, perturbations are applied only within the specified object's bounding box, reducing the perturbed area and enhancing attack stealthiness.
Experiments were conducted on the Microsoft Common Objects in Context (MS COCO) dataset using You Only Look Once version 3 (YOLOv3), You Only Look Once version 8 (YOLOv8), Faster Region-based Convolutional Neural Networks (Faster R-CNN), and RetinaNet. The results demonstrate a significant advantage of the proposed method in black-box settings; notably, the transfer-attack success rate against RetinaNet reached 82.59%.
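The three ingredients described in the abstract can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the paper's implementation: the function names, the brightness-jitter range, and the erase fraction are hypothetical choices, and the paper's exact logit-margin formulation and RCEN transforms may differ.

```python
import numpy as np

def logit_margin_loss(logits, target_idx):
    """Logit-margin objective (sketch): the gap between the target-class
    logit and the largest non-target logit. Maximizing this gap pushes
    the adversarial example toward the target class."""
    target_logit = logits[target_idx]
    non_target = np.delete(logits, target_idx)
    return target_logit - non_target.max()

def random_erase_color(img, rng, erase_frac=0.2):
    """RCEN-style input transform (hypothetical parameters): erase a random
    patch and jitter brightness so gradients differ across iterations,
    reducing overfitting to the white-box model."""
    img = img.copy()
    h, w = img.shape[:2]
    eh, ew = int(h * erase_frac), int(w * erase_frac)
    y, x = rng.integers(0, h - eh), rng.integers(0, w - ew)
    img[y:y + eh, x:x + ew] = rng.random()            # random erasing
    img = np.clip(img * rng.uniform(0.8, 1.2), 0, 1)  # color (brightness) jitter
    return img

def bbox_mask_perturbation(delta, box):
    """Keep the perturbation only inside the object's bounding box
    (x1, y1, x2, y2), shrinking the perturbed area for stealthiness."""
    mask = np.zeros_like(delta)
    x1, y1, x2, y2 = box
    mask[y1:y2, x1:x2] = 1.0
    return delta * mask
```

In an iterative attack loop, each step would transform the current adversarial image with `random_erase_color`, ascend the gradient of `logit_margin_loss`, and pass the accumulated perturbation through `bbox_mask_perturbation` before adding it back to the clean image.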
Pages: 4387-4412
Page count: 26