Empirical kernel map-based multilayer extreme learning machines for representation learning

Cited by: 27
Authors
Chi-Man Vong [1 ]
Chen, Chuangquan [1 ]
Wong, Pak-Kin [2 ]
Affiliations
[1] Univ Macau, Dept Comp & Informat Sci, Macau, Peoples R China
[2] Univ Macau, Dept Electromech Engn, Macau, Peoples R China
Keywords
Kernel learning; Multilayer extreme learning machine (ML-ELM); Empirical kernel map (EKM); Representation learning; Stacked autoencoder (SAE)
DOI
10.1016/j.neucom.2018.05.032
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Recently, multilayer extreme learning machine (ML-ELM) and hierarchical extreme learning machine (H-ELM) were developed for representation learning, reducing training time from hours to seconds compared with the traditional stacked autoencoder (SAE). However, ML-ELM and H-ELM suffer from three practical issues: (1) the random projection in every layer leads to unstable and suboptimal performance; (2) manually tuning the number of hidden nodes in every layer is time-consuming; and (3) with a large hidden layer, training becomes relatively slow and large storage is required. More recently, issues (1) and (2) were resolved by a kernel method, namely multilayer kernel ELM (ML-KELM), which encodes the hidden layer as a kernel matrix (computed by applying a kernel function to the input data), but the storage and computation costs of the kernel matrix pose a major challenge in large-scale applications. In this paper, we empirically show that these issues can be alleviated by encoding the hidden layer as an approximate empirical kernel map (EKM) computed from a low-rank approximation of the kernel matrix. The proposed method, called ML-EKM-ELM, makes three contributions: (1) stable and better performance is achieved without any random projection mechanism; (2) exhaustive manual tuning of the number of hidden nodes in every layer is eliminated; (3) EKM is scalable and produces a much smaller hidden layer, enabling fast training and low memory storage, and is thereby suitable for large-scale problems. Experimental results on benchmark datasets demonstrate the effectiveness of the proposed ML-EKM-ELM. As an illustrative example, on the NORB dataset, ML-EKM-ELM is up to 16 times faster than ML-KELM for training and up to 37 times faster for testing, with only a 0.35% loss of accuracy, while memory storage can be reduced to as little as 1/9. (C) 2018 Elsevier B.V. All rights reserved.
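The abstract's core idea of replacing the full kernel matrix with an approximate empirical kernel map built from a low-rank approximation can be illustrated with a Nyström-style sketch. This is not the authors' implementation; the function names (`rbf_kernel`, `empirical_kernel_map`), the RBF bandwidth `gamma`, and the landmark count `m = 50` are illustrative assumptions. The point is that an n x m feature matrix `Phi` with `Phi @ Phi.T ≈ K` can stand in for the full n x n kernel matrix `K`, cutting storage and computation when m << n.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    # Pairwise RBF kernel: k(a, b) = exp(-gamma * ||a - b||^2)
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def empirical_kernel_map(X, landmarks, gamma=0.1, eps=1e-10):
    """Nystrom-style approximate empirical kernel map (illustrative sketch).

    Returns an n x m' feature matrix Phi such that Phi @ Phi.T approximates
    the full n x n kernel matrix k(X, X), where m' <= len(landmarks).
    """
    C = rbf_kernel(X, landmarks, gamma)           # n x m cross-kernel block
    W = rbf_kernel(landmarks, landmarks, gamma)   # m x m landmark kernel block
    vals, vecs = np.linalg.eigh(W)                # eigendecompose W = U L U^T
    keep = vals > eps                             # drop near-zero/negative modes
    # Phi = C U_k L_k^{-1/2}, so Phi @ Phi.T = C W_k^+ C^T ~ K
    return C @ vecs[:, keep] / np.sqrt(vals[keep])

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))                 # n = 200 samples, d = 5
idx = rng.choice(200, 50, replace=False)          # m = 50 landmark indices
Z = X[idx]
Phi = empirical_kernel_map(X, Z)                  # compact EKM features
K_full = rbf_kernel(X, X)                         # full kernel, for comparison
err = np.linalg.norm(Phi @ Phi.T - K_full) / np.linalg.norm(K_full)
```

On the landmark rows the Nyström reconstruction is exact (up to the dropped near-zero eigenmodes), while elsewhere the quality depends on how well the landmarks cover the data; storing `Phi` (n x m) instead of `K` (n x n) is what makes the hidden layer small and the method scalable.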
Pages: 265-276
Page count: 12
Related papers
50 records in total
  • [1] Kernel-Based Multilayer Extreme Learning Machines for Representation Learning
    Wong, Chi Man
    Vong, Chi Man
    Wong, Pak Kin
    Cao, Jiuwen
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29 (03) : 757 - 762
  • [2] Approximate empirical kernel map-based iterative extreme learning machine for clustering
    Chuangquan Chen
    Chi-Man Vong
    Pak-Kin Wong
    Keng-Iam Tai
    [J]. Neural Computing and Applications, 2020, 32 : 8031 - 8046
  • [4] Enhanced Kernel-Based Multilayer Fuzzy Weighted Extreme Learning Machines
    Wang, Yang
    Wang, An-Na
    Ai, Qing
    Sun, Hai-Jing
    [J]. IEEE ACCESS, 2020, 8 : 166246 - 166260
  • [5] Data representation in kernel based learning machines
    Ancona, N
    Maglietta, R
    Stella, E
    [J]. Proceedings of the Eighth IASTED International Conference on Artificial Intelligence and Soft Computing, 2004, : 243 - 248
  • [6] Correction to: Deep kernel learning in extreme learning machines
    A. L. Afzal
    Nikhitha K. Nair
    S. Asharaf
    [J]. Pattern Analysis and Applications, 2021, 24 (1) : 21 - 21
  • [7] Deep Representation Based on Multilayer Extreme Learning Machine
    Qi, Ya-Li
    Li, Ye-Li
    [J]. ELECTRONICS, COMMUNICATIONS AND NETWORKS V, 2016, 382 : 147 - 152
  • [8] Mixture Correntropy-Based Kernel Extreme Learning Machines
    Zheng, Yunfei
    Chen, Badong
    Wang, Shiyuan
    Wang, Weiqun
    Qin, Wei
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (02) : 811 - 825
  • [9] Representation learning with extreme learning machines and empirical mode decomposition for wind speed forecasting methods
    Yang, Hao-Fan
    Chen, Yi-Ping Phoebe
    [J]. ARTIFICIAL INTELLIGENCE, 2019, 277
  • [10] Correntropy-based robust multilayer extreme learning machines
    Chen Liangjun
    Honeine, Paul
    Hua, Qu
    Zhao Jihong
    Xia, Sun
    [J]. PATTERN RECOGNITION, 2018, 84 : 357 - 370