Activation Function Perturbations in Artificial Neural Networks: Effects on Robustness

Cited by: 0
Authors
Sostre, Justin
Cahill, Nathan
Merkel, Cory
Keywords
Perturbations; Robustness; Artificial Neural Networks; Error Approximation; SENSITIVITY;
DOI
10.1109/WNYISPW63690.2024.10786498
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Artificial Neural Networks (ANNs) are powerful models that can learn underlying nonlinear structures within data, such as images, sounds, and sentences. However, researchers have found a significant unsolved problem with ANNs: small perturbations in input data or within the network's parameters can cause the network to output incorrect predictions or classifications. This vulnerability becomes even more dangerous as models are loaded onto special-purpose chips and computing devices that may be vulnerable to attackers. To address this issue, we investigate the effects of activation function perturbations using foundational mathematical theory within neural networks. We compare our theoretical results with two feed-forward neural networks trained and evaluated on the MNIST dataset. Our findings suggest that even subtle perturbations in activation functions and parameters can have a significant impact on the performance of ANNs. Our methods are effective at both strengthening and destroying ANNs.
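The paper's own models and analysis are not reproduced here, but the core idea of an activation function perturbation can be illustrated with a minimal sketch: inject a small additive error into a hidden-layer activation of a toy feed-forward network and observe how the output shifts. Everything below (the random weights, the sigmoid network, the `eps` perturbation model) is an illustrative assumption, not the authors' code or their MNIST setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer feed-forward network with fixed random weights
# (illustrative only; the paper evaluates trained MNIST networks).
W1 = rng.normal(0.0, 0.5, (16, 8))
W2 = rng.normal(0.0, 0.5, (8, 4))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, eps=0.0):
    """Forward pass with a perturbed hidden activation.

    eps models a faulty activation function phi'(z) = sigmoid(z) + eps,
    e.g. an approximation error introduced by special-purpose hardware.
    """
    h = sigmoid(x @ W1) + eps   # perturbation applied after the activation
    return sigmoid(h @ W2)

# Compare perturbed outputs against the clean forward pass.
x = rng.normal(0.0, 1.0, (32, 16))
clean = forward(x)
for eps in (0.0, 0.01, 0.1):
    dev = np.max(np.abs(forward(x, eps) - clean))
    print(f"eps={eps:5.2f}  max output deviation = {dev:.4f}")
```

Even this toy setup shows the qualitative behavior the abstract describes: the output deviation grows with the size of the activation perturbation, so small hardware-level errors in the activation function propagate to the network's predictions.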
Pages: 4