Robust Adversarial Attacks on Imperfect Deep Neural Networks in Fault Classification

Cited by: 0
Authors:
Jiang, Xiaoyu [1 ,2 ]
Kong, Xiangyin [2 ]
Zheng, Junhua [3 ]
Ge, Zhiqiang [4 ]
Zhang, Xinmin [2 ]
Song, Zhihuan [1 ,2 ]
Affiliations:
[1] Guangdong Univ Petrochem Technol, Sch Automat, Maoming 525000, Peoples R China
[2] Zhejiang Univ, Coll Control Sci & Engn, State Key Lab Ind Control Technol, Hangzhou 310027, Peoples R China
[3] Zhejiang Univ Sci & Technol, Sch Automat & Elect Engn, Hangzhou 310023, Peoples R China
[4] Southeast Univ, Sch Math, Nanjing 210096, Peoples R China
Funding: National Natural Science Foundation of China
Keywords:
Data models; Artificial neural networks; Perturbation methods; Closed box; Predictive models; Iterative methods; Indexes; Adversarial attack; classification confidence score; imperfect DNNs; iterative targeted attacks; model evaluation
DOI:
10.1109/TII.2024.3449999
CLC classification: TP [Automation Technology, Computer Technology]
Subject classification: 0812
Abstract:
In recent years, deep neural networks (DNNs) have been widely applied in fault classification tasks. Their adversarial security has received attention, but little consideration has been given to the robustness of adversarial attacks against imperfect DNNs. Owing to the data scarcity and quality deficiencies prevalent in industrial data, the performance of DNNs may be severely constrained. In addition, black-box attacks against industrial fault classification models have difficulty in obtaining sufficient and comprehensive data for constructing surrogate models with perfect decision boundaries. To address this gap, this article analyzes the outcomes of adversarial attacks on imperfect DNNs and categorizes their decision scenarios. Subsequently, building on this analysis, we propose a robust adversarial attack strategy that transforms traditional adversarial attacks into an iterative targeted attack (ITA). The ITA framework begins with an evaluation of DNNs, during which a classification confidence score (CCS) is designed. Using the CCS and the prediction probability of the data, the labels and sequences for targeted attacks are defined. The adversarial attacks are then carried out by iteratively selecting attack targets and using gradient optimization. Experimental results on both a benchmark dataset and an industrial case demonstrate the superiority of the proposed method.
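The abstract's core loop — select a target class, then iteratively perturb the input by gradient steps within a perturbation budget — can be sketched generically. The following is an illustrative targeted attack on a toy linear softmax classifier, not the authors' ITA implementation; the `confidence_margin` helper is a hypothetical stand-in for the paper's CCS, whose actual definition is not given here.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over logits z."""
    e = np.exp(z - z.max())
    return e / e.sum()

def targeted_attack(x, W, b, target, eps=0.5, alpha=0.05, steps=100):
    """Iteratively perturb x within an L-infinity ball of radius eps so that
    a linear softmax classifier (logits = W @ x + b) favors class `target`."""
    x_adv = x.astype(float).copy()
    onehot = np.zeros(W.shape[0])
    onehot[target] = 1.0
    for _ in range(steps):
        p = softmax(W @ x_adv + b)
        # Gradient of the cross-entropy loss toward the target w.r.t. the
        # input: dL/dx = W^T (p - onehot(target))
        grad = W.T @ (p - onehot)
        x_adv -= alpha * np.sign(grad)            # targeted FGSM-style step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the ball
    return x_adv

def confidence_margin(p):
    """Toy confidence score: gap between the top two class probabilities
    (a stand-in for the paper's CCS, which is defined differently)."""
    top2 = np.sort(p)[-2:]
    return float(top2[1] - top2[0])
```

In this sketch a low margin flags inputs near a decision boundary — roughly the kind of signal the paper uses to order its iterative targeted attacks, though the exact scoring and target-selection rules are specific to the ITA framework.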
Pages: 11