Bi-modal derivative adaptive activation function sigmoidal feedforward artificial neural networks

Cited by: 8
Authors
Mishra, Akash [1 ]
Chandra, Pravin [1 ]
Ghose, Udayan [1 ]
Sodhi, Sartaj Singh [1 ]
Affiliation
[1] Guru Gobind Singh Indraprastha Univ, Univ Sch Informat & Commun Technol, Dwarka Sect 16C, Delhi 110078, India
Keywords
Activation function adaptation; Bi-modal derivative activation function; Activation function; Resilient backpropagation algorithm; Sigmoidal feed-forward artificial neural network; APPROXIMATION; FFANNS
DOI
10.1016/j.asoc.2017.09.002
CLC number
TP18 [Theory of artificial intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In this work, an adaptive mechanism for choosing the activation function is proposed and described. Four bi-modal derivative sigmoidal adaptive activation functions are used as the activation function at the hidden layer of single-hidden-layer sigmoidal feedforward artificial neural networks. These four bi-modal derivative activation functions are grouped into asymmetric and anti-symmetric activation functions (two in each group). For comparison, the logistic function (an asymmetric function) and the function obtained by subtracting 0.5 from it (an anti-symmetric function) are also used as activation functions for the hidden-layer nodes. The resilient backpropagation algorithm with improved weight backtracking (iRprop(+)) is used to adapt the parameters of the activation functions as well as the weights and/or biases of the sigmoidal feedforward artificial neural networks. The learning tasks used to demonstrate the efficacy and efficiency of the proposed mechanism are 10 function approximation tasks and four real benchmark problems taken from the UCI machine learning repository. The obtained results demonstrate that, for both asymmetric and anti-symmetric activation usage, the proposed adaptive activation functions are as good as, if not better than, the sigmoidal function without any adaptive parameter when used as the activation function of the hidden-layer nodes. (C) 2017 Elsevier B.V. All rights reserved.
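The abstract does not give the paper's exact functional forms, but the idea of a "bi-modal derivative" sigmoidal activation can be illustrated with one simple anti-symmetric construction (an assumption for illustration, not the authors' function): a sum of two sigmoids shifted by an adaptable parameter `a`, whose derivative develops two modes near x = -a and x = +a once `a` is large enough. The gradient of the activation with respect to `a` is what would let an optimizer such as iRprop(+) adapt `a` alongside the network weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bimodal_act(x, a):
    # Hypothetical anti-symmetric activation: sum of two shifted sigmoids.
    # Satisfies f(-x) = -f(x); for sufficiently large a its derivative
    # is bi-modal, peaking near x = -a and x = +a.
    return sigmoid(x + a) + sigmoid(x - a) - 1.0

def bimodal_deriv(x, a):
    # Derivative w.r.t. the input x: sum of two shifted sigmoid derivatives.
    s1, s2 = sigmoid(x + a), sigmoid(x - a)
    return s1 * (1 - s1) + s2 * (1 - s2)

def dact_da(x, a):
    # Gradient w.r.t. the shape parameter a itself; this is the quantity
    # an optimizer like iRprop(+) would use to adapt a during training.
    s1, s2 = sigmoid(x + a), sigmoid(x - a)
    return s1 * (1 - s1) - s2 * (1 - s2)

# Count strict local maxima of the derivative on a fine grid.
xs = np.linspace(-10.0, 10.0, 2001)
d = bimodal_deriv(xs, a=3.0)
peaks = int(np.sum((d[1:-1] > d[:-2]) & (d[1:-1] > d[2:])))
print(peaks)  # two modes -> prints 2
```

With `a = 0` the construction collapses to an ordinary (uni-modal-derivative) sigmoid shape, so adapting `a` smoothly interpolates between the plain and bi-modal regimes; the paper's actual four functions and their parameterization should be taken from the full text.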
Pages: 983-994
Page count: 12