Robust Adversarial Attacks on Imperfect Deep Neural Networks in Fault Classification

Cited: 0
Authors
Jiang, Xiaoyu [1 ,2 ]
Kong, Xiangyin [2 ]
Zheng, Junhua [3 ]
Ge, Zhiqiang [4 ]
Zhang, Xinmin [2 ]
Song, Zhihuan [1 ,2 ]
Affiliations
[1] Guangdong Univ Petrochem Technol, Sch Automat, Maoming 525000, Peoples R China
[2] Zhejiang Univ, Coll Control Sci & Engn, State Key Lab Ind Control Technol, Hangzhou 310027, Peoples R China
[3] Zhejiang Univ Sci & Technol, Sch Automat & Elect Engn, Hangzhou 310023, Peoples R China
[4] Southeast Univ, Sch Math, Nanjing 210096, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Data models; Artificial neural networks; Perturbation methods; Closed box; Predictive models; Iterative methods; Indexes; Adversarial attack; classification confidence score; imperfect DNNs; iterative targeted attacks; model evaluation;
DOI
10.1109/TII.2024.3449999
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In recent years, deep neural networks (DNNs) have been widely applied in fault classification tasks. Their adversarial security has received attention, but little consideration has been given to the robustness of adversarial attacks against imperfect DNNs. Owing to the data scarcity and quality deficiencies prevalent in industrial data, the performance of DNNs may be severely constrained. In addition, black-box attacks against industrial fault classification models have difficulty in obtaining sufficient and comprehensive data for constructing surrogate models with perfect decision boundaries. To address this gap, this article analyzes the outcomes of adversarial attacks on imperfect DNNs and categorizes their decision scenarios. Subsequently, building on this analysis, we propose a robust adversarial attack strategy that transforms traditional adversarial attacks into an iterative targeted attack (ITA). The ITA framework begins with an evaluation of DNNs, during which a classification confidence score (CCS) is designed. Using the CCS and the prediction probability of the data, the labels and sequences for targeted attacks are defined. The adversarial attacks are then carried out by iteratively selecting attack targets and using gradient optimization. Experimental results on both a benchmark dataset and an industrial case demonstrate the superiority of the proposed method.
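The abstract describes the ITA framework only at a high level: score the model's confidence, order candidate target labels by predicted probability, then attack each target in turn with gradient optimization inside a perturbation budget. As a rough illustration of that loop (not the paper's actual method), the sketch below runs a targeted, sign-gradient attack on a toy linear softmax classifier; the function name, the step/budget parameters, and the use of prediction probability alone for target ordering (in place of the paper's CCS) are all assumptions for the example.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def iterative_targeted_attack(W, b, x, eps=0.5, step=0.05, n_iter=200):
    """Illustrative iterative targeted attack on a linear softmax model.

    Candidate target labels are tried in descending order of predicted
    probability (skipping the current class), loosely mimicking the idea
    of defining labels and a sequence for targeted attacks.  Each target
    is attacked with FGSM-style sign-gradient steps, clipped to an
    L-infinity ball of radius eps around the clean sample x.
    """
    probs = softmax(W @ x + b)
    y0 = int(np.argmax(probs))                  # current predicted class
    targets = [c for c in np.argsort(-probs) if c != y0]
    for t in targets:
        x_adv = x.copy()
        for _ in range(n_iter):
            p = softmax(W @ x_adv + b)
            if np.argmax(p) == t:
                return x_adv, int(t)            # target reached
            # gradient of cross-entropy toward target t w.r.t. the input
            grad = W.T @ (p - np.eye(len(p))[t])
            x_adv = x_adv - step * np.sign(grad)
            x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the budget
        # this target failed within n_iter steps; try the next one
    return x, y0  # no target succeeded; return the clean sample
```

On a small example the loop flips a 3-class linear model's decision while keeping the perturbation inside the eps-ball; a real attack on an imperfect DNN would replace the analytic gradient with backpropagation through the network.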
Pages: 11