Activation Function Perturbations in Artificial Neural Networks: Effects on Robustness

Cited by: 0
Authors
Sostre, Justin
Cahill, Nathan
Merkel, Cory
Keywords
Perturbations; Robustness; Artificial Neural Networks; Error Approximation; Sensitivity
DOI
10.1109/WNYISPW63690.2024.10786498
CLC Classification Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Artificial Neural Networks (ANNs) are powerful models that can learn underlying nonlinear structure in data such as images, sounds, and sentences. However, researchers have identified a significant unsolved problem with ANNs: small perturbations to the input data or to the network's parameters can cause the network to output incorrect predictions or classifications. This vulnerability becomes even more dangerous as models are deployed on special-purpose chips and computing devices that may be exposed to attackers. To address this issue, we use foundational mathematical theory to investigate the effects of activation function perturbations within neural networks. We compare our theoretical results with two feed-forward neural networks trained and evaluated on the MNIST dataset. Our findings suggest that even subtle perturbations in activation functions and parameters can have a significant impact on the performance of ANNs. Our methods are effective at both strengthening and destroying ANNs.
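This record does not include the paper's derivations, but a short numpy sketch can illustrate the kind of experiment the abstract describes: apply a small perturbation to a network's activation function and compare the resulting output shift against a first-order error approximation. Everything below (the network shape, the additive perturbation model, and all names) is an illustrative assumption, not the authors' method.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's implementation): measure how
# a small additive perturbation of the activation function shifts the outputs
# of a two-layer feed-forward network, and compare against a first-order
# error approximation.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def perturbed_sigmoid(z, eps):
    # Assumed perturbation model: a uniform additive shift of size eps,
    # e.g. a biased analog/hardware implementation of the sigmoid.
    return sigmoid(z) + eps

# Fixed random weights for an MNIST-shaped 784 -> 128 -> 10 network.
W1 = rng.normal(0.0, 0.05, (128, 784))
b1 = np.zeros(128)
W2 = rng.normal(0.0, 0.05, (10, 128))
b2 = np.zeros(10)

def forward(x, act):
    h = act(W1 @ x + b1)     # hidden layer
    return act(W2 @ h + b2)  # output layer uses the same activation

x = rng.random(784)  # stand-in for one flattened MNIST image
eps = 1e-3

clean = forward(x, sigmoid)
noisy = forward(x, lambda z: perturbed_sigmoid(z, eps))

# First-order estimate: the hidden-layer shift eps propagates through W2,
# is scaled by the output activation's derivative, and the output
# activation itself is shifted by eps as well.
z2 = W2 @ sigmoid(W1 @ x + b1) + b2
dsig = sigmoid(z2) * (1.0 - sigmoid(z2))
predicted = dsig * (eps * (W2 @ np.ones(128))) + eps

print("actual output shift:    ", np.linalg.norm(noisy - clean))
print("first-order prediction: ", np.linalg.norm(predicted))
```

For an eps this small, the two printed norms should agree closely, which is the sense in which a first-order theory can predict how activation perturbations degrade a network's outputs.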
Pages: 4