Activation Function Perturbations in Artificial Neural Networks: Effects on Robustness

Cited: 0
Authors
Sostre, Justin
Cahill, Nathan
Merkel, Cory
Affiliations
Keywords
Perturbations; Robustness; Artificial Neural Networks; Error Approximation; Sensitivity
DOI
10.1109/WNYISPW63690.2024.10786498
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Artificial Neural Networks (ANNs) are powerful models that can learn underlying nonlinear structure in data such as images, sounds, and sentences. However, ANNs suffer from a significant unsolved problem: small perturbations in the input data or in the network's parameters can cause the network to output incorrect predictions or classifications. This vulnerability becomes even more dangerous as models are deployed on special-purpose chips and computing devices that may be exposed to attackers. To address this issue, we investigate the effects of activation function perturbations in neural networks using foundational mathematical theory. We compare our theoretical results against two feed-forward neural networks trained and evaluated on the MNIST dataset. Our findings suggest that even subtle perturbations in activation functions and parameters can significantly affect the performance of ANNs, and that our methods are effective at both strengthening and degrading network robustness.
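The kind of effect the abstract describes can be illustrated with a minimal sketch. This is not the paper's actual perturbation model or its MNIST networks; the additive perturbation `f̃(z) = f(z) + ε·z` and the random two-layer toy network below are assumptions chosen purely to show how a small activation-function perturbation propagates to the output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer feed-forward network with random weights (illustrative only;
# the paper's networks are trained on MNIST, which is not reproduced here).
W1 = rng.normal(0.0, 0.1, (32, 16))
W2 = rng.normal(0.0, 0.1, (16, 10))

def relu(z):
    return np.maximum(z, 0.0)

def perturbed_relu(z, eps=0.05):
    # Hypothetical additive perturbation of the activation function,
    # f~(z) = relu(z) + eps * z, a simple stand-in for hardware-induced drift.
    return relu(z) + eps * z

def forward(x, act):
    return act(x @ W1) @ W2

x = rng.normal(0.0, 1.0, (100, 32))
clean = forward(x, relu)
noisy = forward(x, perturbed_relu)

# The mean absolute output deviation is nonzero for eps > 0 and
# vanishes as the perturbation is removed (eps -> 0).
deviation = float(np.abs(clean - noisy).mean())
print(f"mean output deviation: {deviation:.4f}")
```

Even this linear additive perturbation shifts every output logit; in a trained classifier such shifts can flip predictions near decision boundaries, which is the failure mode the abstract refers to.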
Pages: 4
Related Papers
(50 total)
  • [31] A Review of Activation Function for Artificial Neural Network
    Rasamoelina, Andrinandrasana David
    Adjailia, Fouzia
    Sincak, Peter
    2020 IEEE 18TH WORLD SYMPOSIUM ON APPLIED MACHINE INTELLIGENCE AND INFORMATICS (SAMI 2020), 2020, : 281 - 286
  • [32] Studies of stability and robustness for artificial neural networks and boosted decision trees
    Yang, Hai-Jun
    Roe, Byron P.
    Zhu, Ji
    NUCLEAR INSTRUMENTS & METHODS IN PHYSICS RESEARCH SECTION A-ACCELERATORS SPECTROMETERS DETECTORS AND ASSOCIATED EQUIPMENT, 2007, 574 (02): : 342 - 349
  • [33] Understanding Robustness and Generalization of Artificial Neural Networks Through Fourier Masks
    Karantzas, Nikos
    Besier, Emma
    Caro, Josue Ortega
    Pitkow, Xaq
    Tolias, Andreas S.
    Patel, Ankit B.
    Anselmi, Fabio
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2022, 5
  • [34] Robustness analysis of a class of discrete-time recurrent neural networks under perturbations
    Feng, ZS
    Michel, AN
    Department of Electrical Engineering, University of Notre Dame, Notre Dame, IN 46556, United States
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-FUNDAMENTAL THEORY AND APPLICATIONS, 1999, 46 (12): : 1482 - 1486
  • [35] Investigation into the Robustness of Artificial Neural Networks for a Case Study in Civil Engineering
    Shahin, M. A.
    Maier, H. R.
    Jaksa, M. B.
    MODSIM 2005: INTERNATIONAL CONGRESS ON MODELLING AND SIMULATION: ADVANCES AND APPLICATIONS FOR MANAGEMENT AND DECISION MAKING, 2005, : 79 - 83
  • [36] Robustness analysis of a class of discrete-time recurrent neural networks under perturbations
    Feng, ZS
    Michel, AN
    PROCEEDINGS OF THE 1998 AMERICAN CONTROL CONFERENCE, VOLS 1-6, 1998, : 53 - 57
  • [37] Robustness analysis of a class of discrete-time recurrent neural networks under perturbations
    Feng, ZS
    Michel, AN
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-FUNDAMENTAL THEORY AND APPLICATIONS, 1999, 46 (12): : 1482 - 1486
  • [38] Adaptive basis function for artificial neural networks
    Philip, NS
    Joseph, KB
    NEUROCOMPUTING, 2002, 47 : 21 - 34
  • [39] Function approximation using artificial neural networks
    Zainuddin, Zarita
    Pauline, Ong
    APPLIED MATHEMATICS FOR SCIENCE AND ENGINEERING, 2007, : 140 - +
  • [40] Neural networks with asymmetric activation function for function approximation
    Gomes, Gecynalda S. da S.
    Ludermir, Teresa B.
    Almeida, Leandro M.
    IJCNN: 2009 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-6, 2009, : 2310 - 2317