Activation Function Perturbations in Artificial Neural Networks: Effects on Robustness

Cited by: 0
Authors
Sostre, Justin
Cahill, Nathan
Merkel, Cory
Institutions
Keywords
Perturbations; Robustness; Artificial Neural Networks; Error Approximation; Sensitivity
DOI
10.1109/WNYISPW63690.2024.10786498
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Artificial Neural Networks (ANNs) are powerful models that can learn underlying nonlinear structures within data, such as images, sounds, and sentences. However, researchers have found a significant unsolved problem with ANNs: small perturbations in input data or within the network's parameters can cause the network to output incorrect predictions or classifications. This vulnerability becomes even more dangerous as models are loaded onto special-purpose chips and computing devices that may be vulnerable to attackers. To address this issue, we investigate the effects of activation function perturbations using foundational mathematical theory within neural networks. We compare our theoretical results with two feed-forward neural networks trained and evaluated on the MNIST dataset. Our findings suggest that even subtle perturbations in activation functions and parameters can have a significant impact on the performance of ANNs. Our methods are effective at both strengthening and destroying ANNs.
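The abstract's central claim, that small perturbations of an activation function can measurably shift a network's outputs, can be illustrated with a minimal sketch. This is not the paper's method or its MNIST networks; the two-layer architecture, random weights, and additive shift of the ReLU output are all assumptions chosen for illustration.

```python
# Illustrative sketch (not the paper's experiment): measure how a small
# additive perturbation of a hidden-layer ReLU shifts the output of a
# tiny feed-forward network with fixed random weights.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a 16 -> 8 -> 3 network (illustration only).
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 3))

def forward(x, delta=0.0):
    """Forward pass where the ReLU output is shifted by `delta`,
    modeling a small perturbation of the activation function."""
    h = np.maximum(0.0, x @ W1) + delta  # perturbed activation
    return h @ W2

x = rng.normal(size=(1, 16))
clean = forward(x)                     # unperturbed output
perturbed = forward(x, delta=0.01)     # perturbed output

# The output deviation induced by the activation perturbation:
print(np.linalg.norm(perturbed - clean))
```

In this linear-readout setting the deviation grows proportionally with `delta`; the paper's point is that even such small shifts, compounded through depth and nonlinearity, can significantly degrade (or, if applied deliberately, strengthen) classification performance.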
Pages: 4
Related Papers
50 records total
  • [1] Parabola As an Activation Function of Artificial Neural Networks
    Khachumov, M. V.
    Emelyanova, Yu. G.
    SCIENTIFIC AND TECHNICAL INFORMATION PROCESSING, 2024, 51 (05) : 471 - 477
  • [2] A novel type of activation function in artificial neural networks: Trained activation function
    Ertugrul, Omer Faruk
    NEURAL NETWORKS, 2018, 99 : 148 - 157
  • [3] Stochastic Implementation of the Activation Function for Artificial Neural Networks
    Yeo, Injune
    Gi, Sang-gyun
    Lee, Byung-geun
    Chu, Myonglae
    PROCEEDINGS OF 2016 IEEE BIOMEDICAL CIRCUITS AND SYSTEMS CONFERENCE (BIOCAS), 2016, : 440 - 443
  • [4] Artificial Neural Networks Activation Function HDL Coder
    Namin, Ashkan Hosseinzadeh
    Leboeuf, Karl
    Wu, Huapeng
    Ahmadi, Majid
    2009 IEEE INTERNATIONAL CONFERENCE ON ELECTRO/INFORMATION TECHNOLOGY, 2009, : 387 - 390
  • [5] FPGA Realization of Activation Function for Artificial Neural Networks
    Saichand, Venakata
    Nirmala, Devi M.
    Arumugam, S.
    Mohankumar, N.
    ISDA 2008: EIGHTH INTERNATIONAL CONFERENCE ON INTELLIGENT SYSTEMS DESIGN AND APPLICATIONS, VOL 3, PROCEEDINGS, 2008, : 159 - 164
  • [6] A Hybrid Chaotic Activation Function for Artificial Neural Networks
    Reid, Siobhan
    Ferens, Ken
    ADVANCES IN ARTIFICIAL INTELLIGENCE AND APPLIED COGNITIVE COMPUTING, 2021, : 1097 - 1105
  • [7] Robustness of Biologically Grounded Neural Networks Against Image Perturbations
    Teichmann, Michael
    Larisch, Rene
    Hamker, Fred H.
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT X, 2024, 15025 : 220 - 230
  • [8] Towards Improving Robustness of Deep Neural Networks to Adversarial Perturbations
    Amini, Sajjad
    Ghaemmaghami, Shahrokh
    IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (07) : 1889 - 1903
  • [9] Formalizing Generalization and Adversarial Robustness of Neural Networks to Weight Perturbations
    Tsai, Yu-Lin
    Hsu, Chia-Yi
    Yu, Chia-Mu
    Chen, Pin-Yu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [10] Robustness for Stability and Stabilization of Boolean Networks With Stochastic Function Perturbations
    Li, Haitao
    Yang, Xinrong
    Wang, Shuling
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2021, 66 (03) : 1231 - 1237