CARLA-GEAR: A Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Deep Learning Vision Models

Cited by: 0
Authors
Nesti, Federico [1 ]
Rossolini, Giulio [1 ]
D'Amico, Gianluca [1 ]
Biondi, Alessandro [1 ]
Buttazzo, Giorgio [1 ]
Affiliations
[1] Scuola Superiore Sant'Anna, Department of Excellence in Robotics & AI, Via S. Lorenzo 26, I-56127 Pisa, Italy
Keywords
Robustness; Task analysis; Autonomous vehicles; Systematics; Object detection; Benchmark testing; Three-dimensional displays; Adversarial robustness; autonomous driving; CARLA simulator; adversarial defenses
DOI
Not available
CLC Number
TU [Building Science]
Discipline Code
0813
Abstract
Adversarial examples represent a serious threat to deep neural networks in several application domains, and a large body of work has been produced to investigate them and mitigate their effects. Nevertheless, little work has been devoted to the generation of datasets specifically designed to evaluate the adversarial robustness of neural models. This paper presents CARLA-GEAR, a tool for the automatic generation of photo-realistic synthetic datasets related to driving scenarios that can be used for a systematic evaluation of the adversarial robustness of neural models against physical adversarial patches, as well as for comparing the performance of different adversarial defense/detection methods. The tool is built on the CARLA simulator, using its Python API, and allows the generation of datasets for several vision tasks in the context of autonomous driving. The adversarial patches included in the generated datasets are attached to billboards or the back of a truck and are crafted with state-of-the-art white-box attack strategies to maximize the prediction error of the model under test. Finally, the paper presents an experimental study that evaluates the performance of some defense methods against such attacks, showing how the datasets generated with CARLA-GEAR might be used in future work as a benchmark for adversarial defense in the real world. All the code and datasets used in this paper are available at http://carlagear.retis.santannapisa.it.
Pages: 9840-9851
Page count: 12
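
As an illustration of the data-capture mechanism the abstract describes, below is a minimal sketch of grabbing RGB frames through the CARLA Python API. It assumes a CARLA server listening on localhost:2000 and is not CARLA-GEAR's actual code, which is available at the URL above.

# Minimal sketch (not CARLA-GEAR's code): capture RGB frames with the
# CARLA Python API; assumes a CARLA server running on localhost:2000.
import carla

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()
blueprint_library = world.get_blueprint_library()

# Spawn an ego vehicle at the first available spawn point.
vehicle_bp = blueprint_library.filter('vehicle.*')[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Attach an RGB camera to the vehicle.
camera_bp = blueprint_library.find('sensor.camera.rgb')
camera_bp.set_attribute('image_size_x', '1280')
camera_bp.set_attribute('image_size_y', '720')
camera_transform = carla.Transform(carla.Location(x=1.5, z=2.4))
camera = world.spawn_actor(camera_bp, camera_transform, attach_to=vehicle)

# Save each frame to disk; a dataset generator would also record
# ground-truth labels for the chosen vision task.
camera.listen(lambda image: image.save_to_disk('out/%06d.png' % image.frame))

The abstract also mentions crafting the patches with white-box attacks that maximize the prediction error of the model under test. A generic gradient-ascent patch loop in PyTorch might look as follows; apply_patch and loss_fn are hypothetical placeholders, and in CARLA-GEAR the patch is rendered on billboards or a truck inside the simulator rather than pasted in image space.

# Generic white-box patch optimization (an illustrative sketch, not the
# paper's exact attack): ascend the task loss of a frozen model.
import torch

def optimize_patch(model, loader, apply_patch, loss_fn,
                   patch_size=(3, 200, 200), steps=100, lr=0.01):
    # apply_patch(images, patch) is a hypothetical helper that pastes
    # the patch into the scene images; loss_fn is the task loss.
    patch = torch.rand(patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    model.eval()
    for _ in range(steps):
        for images, targets in loader:
            optimizer.zero_grad()
            adv_images = apply_patch(images, patch.clamp(0, 1))
            loss = -loss_fn(model(adv_images), targets)  # maximize task loss
            loss.backward()
            optimizer.step()
    return patch.detach().clamp(0, 1)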