Robust Adversarial Attacks on Imperfect Deep Neural Networks in Fault Classification

Cited by: 0
Authors
Jiang, Xiaoyu [1 ,2 ]
Kong, Xiangyin [2 ]
Zheng, Junhua [3 ]
Ge, Zhiqiang [4 ]
Zhang, Xinmin [2 ]
Song, Zhihuan [1 ,2 ]
Affiliations
[1] Guangdong Univ Petrochem Technol, Sch Automat, Maoming 525000, Peoples R China
[2] Zhejiang Univ, Coll Control Sci & Engn, State Key Lab Ind Control Technol, Hangzhou 310027, Peoples R China
[3] Zhejiang Univ Sci & Technol, Sch Automat & Elect Engn, Hangzhou 310023, Peoples R China
[4] Southeast Univ, Sch Math, Nanjing 210096, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Data models; Artificial neural networks; Perturbation methods; Closed box; Predictive models; Iterative methods; Indexes; Adversarial attack; classification confidence score; imperfect DNNs; iterative targeted attacks; model evaluation;
DOI
10.1109/TII.2024.3449999
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In recent years, deep neural networks (DNNs) have been widely applied in fault classification tasks. Their adversarial security has received attention, but little consideration has been given to the robustness of adversarial attacks against imperfect DNNs. Owing to the scarcity and quality deficiencies prevalent in industrial data, the performance of DNNs may be severely constrained. In addition, black-box attacks on industrial fault classification models rarely obtain data sufficient and comprehensive enough to construct surrogate models with accurate decision boundaries. To address this gap, this article analyzes the outcomes of adversarial attacks on imperfect DNNs and categorizes their decision scenarios. Building on this analysis, we propose a robust adversarial attack strategy that transforms traditional adversarial attacks into an iterative targeted attack (ITA). The ITA framework begins with an evaluation of the DNN, during which a classification confidence score (CCS) is designed. Using the CCS and the prediction probabilities of the data, the labels and order of the targeted attacks are defined. The adversarial attacks are then carried out by iteratively selecting attack targets and applying gradient optimization. Experimental results on both a benchmark dataset and an industrial case demonstrate the superiority of the proposed method.
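The abstract does not give the exact CCS formula or attack details, but the core step it describes, iteratively perturbing an input toward a chosen target class via gradient optimization under a bounded perturbation, can be illustrated with a minimal sketch. The example below is hypothetical: it substitutes a linear softmax classifier for the DNN and a standard projected sign-gradient update for the paper's optimizer; the function name `targeted_attack` and all parameters (`eps`, `alpha`, `steps`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def targeted_attack(x, W, b, target, eps=0.5, alpha=0.05, steps=50):
    """Illustrative iterative targeted attack on a linear softmax
    classifier (stand-in for a DNN): take sign-gradient steps that
    decrease the targeted cross-entropy loss -log p[target], while
    projecting the perturbation into an L-infinity ball of radius eps."""
    n_classes = len(b)
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(W @ x_adv + b)
        # gradient of -log p[target] w.r.t. the input
        grad = W.T @ (p - np.eye(n_classes)[target])
        x_adv = x_adv - alpha * np.sign(grad)      # step toward the target class
        x_adv = np.clip(x_adv, x - eps, x + eps)   # keep perturbation bounded
    return x_adv

# Toy 3-class problem: x starts in class 0, attack drives it to class 1.
W = np.array([[2.0, 0.0], [0.0, 2.0], [-1.0, -1.0]])
b = np.zeros(3)
x = np.array([1.0, -1.0])
x_adv = targeted_attack(x, W, b, target=1, eps=1.5, alpha=0.1, steps=100)
```

In the full ITA framework, this inner loop would run per candidate target, with the CCS and prediction probabilities deciding which target labels to attack and in what order; the sketch shows only the gradient-optimization step common to iterative targeted attacks.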
Pages: 11