Robustness of Neural Ensembles Against Targeted and Random Adversarial Learning

Cited: 0
Authors: Wang, Shir Li [1]; Shafi, Kamran [1]; Lokan, Chris [1]; Abbass, Hussein A. [1]
Institution: [1] Univ New S Wales, Sch SEIT UNSW ADFA, Sydney, NSW 2052, Australia
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Machine learning has become a prominent tool in various domains owing to its adaptability. However, this adaptability can be exploited by an adversary to cause machine learning to malfunction; a process known as Adversarial Learning. This paper investigates Adversarial Learning in the context of artificial neural networks. The aim is to test the hypothesis that an ensemble of neural networks trained on the same data manipulated by an adversary would be more robust than a single network. We investigate two attack types: targeted and random. We use Mahalanobis distance and covariance matrices to select targeted attacks. The experiments use both artificial and UCI datasets. The results demonstrate that an ensemble of neural networks trained on attacked data is more robust against the attack than a single network. While many papers have demonstrated that an ensemble of neural networks is more robust against noise than a single network, the significance of the current work lies in the fact that targeted attacks are not white noise.
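The abstract states that Mahalanobis distance and covariance matrices are used to select the points for a targeted attack. The record does not give the paper's exact selection rule, so the sketch below illustrates one plausible criterion (an assumption, not necessarily the authors' method): compute each sample's Mahalanobis distance to the data mean, using the empirical covariance matrix, and take the k most central samples as attack targets.

```python
import numpy as np

def mahalanobis_distances(X, mu=None, cov=None):
    """Mahalanobis distance of each row of X to the mean mu under covariance cov."""
    if mu is None:
        mu = X.mean(axis=0)
    if cov is None:
        cov = np.cov(X, rowvar=False)  # empirical covariance matrix
    inv_cov = np.linalg.inv(cov)
    diff = X - mu
    # Row-wise quadratic form diff^T * inv_cov * diff, then square root.
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, inv_cov, diff))

def select_targets(X, k):
    """Hypothetical selection rule: pick the k samples closest to the data centre."""
    d = mahalanobis_distances(X)
    return np.argsort(d)[:k]

# Illustrative usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
target_idx = select_targets(X, 10)
```

An adversary would then manipulate only the selected rows (e.g. perturbing their features or flipping their labels), which is what distinguishes such a targeted attack from the white-noise corruption the abstract contrasts it with.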
Pages: 8