CARLA-GEAR: A Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Deep Learning Vision Models

Cited by: 0
Authors
Nesti, Federico [1 ]
Rossolini, Giulio [1 ]
D'Amico, Gianluca [1 ]
Biondi, Alessandro [1 ]
Buttazzo, Giorgio [1 ]
Affiliations
[1] Scuola Super Sant Anna, Dept Excellence Robot & AI, Via S Lorenzo 26, I-56127 Pisa, Italy
Keywords
Robustness; Task analysis; Autonomous vehicles; Systematics; Object detection; Benchmark testing; Three-dimensional displays; Adversarial robustness; autonomous driving; CARLA simulator; adversarial defenses
DOI
Not available
Chinese Library Classification
TU [Architectural Science]
Subject Classification Code
0813
Abstract
Adversarial examples represent a serious threat to deep neural networks in several application domains, and a large body of work has been produced to investigate them and mitigate their effects. Nevertheless, little work has been devoted to generating datasets specifically designed to evaluate the adversarial robustness of neural models. This paper presents CARLA-GEAR, a tool for the automatic generation of photo-realistic synthetic datasets of driving scenarios that can be used both for a systematic evaluation of the adversarial robustness of neural models against physical adversarial patches and for comparing the performance of different adversarial defense/detection methods. The tool is built on the CARLA simulator, using its Python API, and allows the generation of datasets for several vision tasks in the context of autonomous driving. The adversarial patches included in the generated datasets are attached to billboards or the back of a truck, and are crafted with state-of-the-art white-box attack strategies so as to maximize the prediction error of the model under test. Finally, the paper presents an experimental study evaluating the performance of some defense methods against such attacks, showing how the datasets generated with CARLA-GEAR might be used in future work as a benchmark for adversarial defense in the real world. All the code and datasets used in this paper are available at http://carlagear.retis.santannapisa.it.
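To make the patch-crafting step concrete, the sketch below shows a generic white-box patch optimization loop of the kind the abstract refers to: a patch is pasted onto clean images and updated by gradient ascent on the task loss of the model under test. This is a minimal PyTorch illustration, not the authors' implementation; `model`, `loader`, the fixed top-left placement, and all hyperparameters are assumptions made here for the example (CARLA-GEAR renders the patch on billboards or a truck inside the 3D scene rather than pasting it in 2D).

```python
# Minimal sketch of a white-box adversarial patch attack (not the
# CARLA-GEAR implementation): the patch is pasted at a fixed image
# region and updated by gradient ascent on the classification loss,
# so as to maximize the prediction error of the model under test.
import torch
import torch.nn.functional as F

def craft_patch(model, loader, patch_size=(3, 64, 64),
                epochs=5, lr=0.01, device="cuda"):
    model.eval()                      # attack a frozen model
    _, ph, pw = patch_size
    patch = torch.rand(patch_size, device=device, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            patched = images.clone()
            # Fixed top-left placement for simplicity; CARLA-GEAR
            # instead attaches the patch to billboards/trucks in 3D.
            patched[:, :, :ph, :pw] = patch
            loss = F.cross_entropy(model(patched), labels)
            opt.zero_grad()
            (-loss).backward()        # minimizing -loss maximizes error
            opt.step()
            with torch.no_grad():
                patch.clamp_(0.0, 1.0)  # keep pixels in a valid range
    return patch.detach()
```

The same loop accommodates more robust variants: averaging the loss over several transformed or re-rendered views of the patched scene (an EOT-style choice, assumed here rather than taken from the paper) makes the patch effective under viewpoint and lighting changes, which is the property a physically placed patch needs.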
Pages: 9840-9851
Number of pages: 12
Related Papers
45 items in total
  • [1] CARLA-GEAR: A Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Deep Learning Vision Models
    Nesti, Federico
    Rossolini, Giulio
    D'Amico, Gianluca
    Biondi, Alessandro
    Buttazzo, Giorgio
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (08) : 9840 - 9851
  • [2] CANARY: An Adversarial Robustness Evaluation Platform for Deep Learning Models on Image Classification
    Sun, Jiazheng
    Chen, Li
    Xia, Chenxiao
    Zhang, Da
    Huang, Rong
    Qiu, Zhi
    Xiong, Wenqi
    Zheng, Jun
    Tan, Yu-An
    ELECTRONICS, 2023, 12 (17)
  • [3] Robustness of Deep Learning Models for Vision Tasks
    Lee, Youngseok
    Kim, Jongweon
    APPLIED SCIENCES-BASEL, 2023, 13 (07):
  • [4] On the Robustness of Deep Learning Models to Universal Adversarial Attack
    Karim, Rezaul
    Islam, Md Amirul
    Mohammed, Noman
    Bruce, Neil D. B.
    2018 15TH CONFERENCE ON COMPUTER AND ROBOT VISION (CRV), 2018, : 55 - 62
  • [5] Adversarial Attacks on Deep Learning Models of Computer Vision: A Survey
    Ding, Jia
    Xu, Zhiwu
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2020, PT III, 2020, 12454 : 396 - 408
  • [6] The Impact of Model Variations on the Robustness of Deep Learning Models in Adversarial Settings
    Juraev, Firuz
    Abuhamad, Mohammed
    Woo, Simon S.
    Thiruvathukal, George K.
    Abuhmed, Tamer
    2024 SILICON VALLEY CYBERSECURITY CONFERENCE, SVCC 2024, 2024,
  • [7] ADVRET: An Adversarial Robustness Evaluating and Testing Platform for Deep Learning Models
    Ren, Fei
    Yang, Yonghui
    Hu, Chi
    Zhou, Yuyao
    Ma, Siyou
    2021 21ST INTERNATIONAL CONFERENCE ON SOFTWARE QUALITY, RELIABILITY AND SECURITY COMPANION (QRS-C 2021), 2021, : 9 - 14
  • [8] Adversarial Robustness for Deep Learning-Based Wildfire Prediction Models
    Ide, Ryo
    Yang, Lei
    FIRE-SWITZERLAND, 2025, 8 (02):
  • [9] Adversarial training and attribution methods enable evaluation of robustness and interpretability of deep learning models for image classification
    Santos, Flavio A. O.
    Zanchettin, Cleber
    Lei, Weihua
    Amaral, Luis A. Nunes
    PHYSICAL REVIEW E, 2024, 110 (05)
  • [10] Robustness of on-device Models: Adversarial Attack to Deep Learning Models on Android Apps
    Huang, Yujin
    Hu, Han
    Chen, Chunyang
    2021 IEEE/ACM 43RD INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING: SOFTWARE ENGINEERING IN PRACTICE (ICSE-SEIP 2021), 2021, : 101 - 110