Robustness of Neural Ensembles Against Targeted and Random Adversarial Learning

Cited by: 0
Authors
Wang, Shir Li [1 ]
Shafi, Kamran [1 ]
Lokan, Chris [1 ]
Abbass, Hussein A. [1 ]
Affiliations
[1] Univ New S Wales, Sch SEIT UNSW ADFA, Sydney, NSW 2052, Australia
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Machine learning has become a prominent tool in various domains owing to its adaptability. However, this adaptability can be exploited by an adversary to cause dysfunction of machine learning, a process known as Adversarial Learning. This paper investigates Adversarial Learning in the context of artificial neural networks. The aim is to test the hypothesis that an ensemble of neural networks trained on the same data manipulated by an adversary would be more robust than a single network. We investigate two attack types: targeted and random. We use Mahalanobis distance and covariance matrices to select targeted attacks. The experiments use both artificial and UCI datasets. The results demonstrate that an ensemble of neural networks trained on attacked data is more robust against the attack than a single network. While many papers have demonstrated that an ensemble of neural networks is more robust against noise than a single network, the significance of the current work lies in the fact that targeted attacks are not white noise.
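As a rough illustration of the selection idea mentioned in the abstract, the sketch below ranks training samples by their Mahalanobis distance from the data mean using the empirical covariance matrix. The function name, the "pick the most central samples" rule, and the toy data are assumptions for illustration only; the paper's exact targeting criterion may differ.

```python
import numpy as np

def mahalanobis_targets(X, n_targets=5):
    """Rank samples by squared Mahalanobis distance from the data mean.

    Hypothetical sketch: samples closest to the centroid are treated as
    'typical' points whose manipulation could most affect training.
    """
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.pinv(cov)  # pseudo-inverse guards against a singular covariance
    diffs = X - mu
    # d2[i] = (x_i - mu)^T * cov_inv * (x_i - mu)
    d2 = np.einsum('ij,jk,ik->i', diffs, cov_inv, diffs)
    return np.argsort(d2)[:n_targets]  # indices of the most central samples

# Toy usage on synthetic 2-D data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
targets = mahalanobis_targets(X, n_targets=5)
print(targets)
```

Using the pseudo-inverse rather than a plain inverse keeps the sketch usable when features are collinear, which is common in small UCI datasets.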
Pages: 8