Extreme Learning Machine With Affine Transformation Inputs in an Activation Function

Cited: 47
Authors
Cao, Jiuwen [1 ,2 ]
Zhang, Kai [3 ]
Yong, Hongwei [4 ]
Lai, Xiaoping [1 ]
Chen, Badong [5 ]
Lin, Zhiping [6 ]
Affiliations
[1] Hangzhou Dianzi Univ, Key Lab IOT & Informat Fus Technol Zhejiang, Hangzhou 310018, Zhejiang, Peoples R China
[2] Univ Wuppertal, Sch Elect Informat & Media Engn, D-42119 Wuppertal, Germany
[3] Harbin Inst Technol, Sch Comp Sci & Technol, Harbin 150001, Heilongjiang, Peoples R China
[4] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[5] Xi An Jiao Tong Univ, Sch Elect & Informat Engn, Xian 710049, Shaanxi, Peoples R China
[6] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Affine transformation (AT) activation function; classification; extreme learning machine (ELM); maximum entropy; regression; CLASSIFICATION; APPROXIMATION;
DOI
10.1109/TNNLS.2018.2877468
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The extreme learning machine (ELM) has attracted much attention over the past decade due to its fast learning speed and convincing generalization performance. However, there still remains a practical issue to be addressed when applying the ELM: the randomly generated hidden node parameters without tuning can lead to the hidden node outputs being nonuniformly distributed, thus giving rise to poor generalization performance. To address this deficiency, a novel activation function with an affine transformation (AT) on its input is introduced into the ELM, which leads to an improved ELM algorithm that is referred to as an AT-ELM in this paper. The scaling and translation parameters of the AT activation function are computed based on the maximum entropy principle in such a way that the hidden layer outputs approximately obey a uniform distribution. Application of the AT-ELM algorithm in nonlinear function regression shows its robustness to the range scaling of the network inputs. Experiments on nonlinear function regression, real-world data set classification, and benchmark image recognition demonstrate better performance for the AT-ELM compared with the original ELM, the regularized ELM, and the kernel ELM. Recognition results on benchmark image data sets also reveal that the AT-ELM outperforms several other state-of-the-art algorithms in general.
Pages: 2093-2107
Page count: 15
Related Papers
50 items in total
  • [31] Transfer Learning with Affine Model Transformation
    Minami, Shunya
    Fukumizu, Kenji
    Hayashi, Yoshihiro
    Yoshida, Ryo
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [32] Flexible ranking extreme learning machine based on matrix-centering transformation
    Chen, Shizhao
    Chen, Kai
    Xu, Chuanfu
    Lan, Long
    [J]. 2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2018,
  • [33] Improved extreme learning machine for function approximation by encoding a priori information
    Han, Fei
    Huang, De-Shuang
    [J]. NEUROCOMPUTING, 2006, 69 (16-18) : 2369 - 2373
  • [34] Robust regularized extreme learning machine with asymmetric Huber loss function
    Gupta, Deepak
    Hazarika, Barenya Bikash
    Berlin, Mohanadhas
    [J]. NEURAL COMPUTING & APPLICATIONS, 2020, 32 (16): : 12971 - 12998
  • [36] Learning to Rank with Extreme Learning Machine
    Zong, Weiwei
    Huang, Guang-Bin
    [J]. NEURAL PROCESSING LETTERS, 2014, 39 (02) : 155 - 166
  • [38] Application of error minimized extreme learning machine for simultaneous learning of a function and its derivatives
    Balasundaram, S.
    Kapil
    [J]. NEUROCOMPUTING, 2011, 74 (16) : 2511 - 2519
  • [39] A new learning algorithm for function approximation incorporating a priori information into extreme learning machine
    Han, Fei
    Lok, Tat-Ming
    Lyu, Michael R.
    [J]. ADVANCES IN NEURAL NETWORKS - ISNN 2006, PT 1, 2006, 3971 : 631 - 636
  • [40] Extreme Learning Machine as a Function Approximator: Initialization of Input Weights and Biases
    Dudek, Grzegorz
    [J]. PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON COMPUTER RECOGNITION SYSTEMS, CORES 2015, 2016, 403 : 59 - 69