Feedforward kernel neural networks, generalized least learning machine, and its deep learning with application to image classification

Cited by: 43
Authors
Wang, Shitong [1 ,2 ]
Jiang, Yizhang [1 ,2 ]
Chung, Fu-Lai [2 ]
Qian, Pengjiang [1 ]
Affiliations
[1] Jiangnan Univ, Sch Digital Media, Wuxi, Jiangsu, Peoples R China
[2] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feedforward kernel neural networks; Least learning machine; Kernel principal component analysis (KPCA); Hidden-layer-tuning-free learning; Deep architecture and learning; REGRESSION; REPRESENTATION; APPROXIMATION; SHAPE;
DOI
10.1016/j.asoc.2015.07.040
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, the architecture of feedforward kernel neural networks (FKNN) is proposed; it encompasses a considerably large family of existing feedforward neural networks and can therefore meet most practical requirements. Contrary to the common understanding of learning, it is revealed that when the number of hidden nodes in every hidden layer and the type of kernel-based activation function are fixed in advance, a special kernel principal component analysis (KPCA) is always implicitly executed. Consequently, none of the hidden layers of such networks needs to be tuned: their parameters can be assigned randomly and may even be independent of the training data. On this basis, the least learning machine (LLM) is extended into a generalized version that admits a much wider range of error functions than the mean squared error (MSE) alone. As an additional merit, it is also revealed that the rigorous Mercer kernel condition is not required in FKNN networks. When the proposed FKNN architecture is constructed layer by layer, i.e., when the number of hidden nodes in each hidden layer is determined by the principal components extracted through an explicit KPCA, FKNN's deep architecture can be developed such that its deep learning framework (DLF) has a strong theoretical guarantee. Our experimental results on image classification show that the proposed deep FKNN architecture and its DLF-based learning indeed enhance classification performance. (C) 2015 Elsevier B.V. All rights reserved.
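The abstract's central claim — that hidden layers with randomly assigned, untuned parameters suffice, with only the output layer solved in closed form — can be illustrated with a minimal ELM/LLM-style sketch. This is not the authors' implementation: the Gaussian-kernel activation, the `gamma` value, the hidden-layer width, and the ridge regularizer are all illustrative assumptions; the paper's generalized LLM additionally admits error functions beyond MSE, which this sketch does not show.

```python
# Minimal sketch of a hidden-layer-tuning-free network: the hidden layer's
# parameters are drawn at random (independent of the training data) and
# never adjusted; only the output weights are fit, here by ridge least
# squares (the MSE criterion).
import numpy as np

rng = np.random.default_rng(0)

def random_hidden_layer(X, n_hidden, rng, gamma=0.1):
    """Kernel-based activations around random, data-independent centers."""
    centers = rng.normal(size=(n_hidden, X.shape[1]))
    sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq_dists)

def fit_output_weights(H, Y, reg=1e-3):
    """Closed-form regularized least-squares solution for the output layer."""
    return np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ Y)

# Toy binary classification problem
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
Y = np.stack([1 - y, y], axis=1)          # one-hot targets

H = random_hidden_layer(X, n_hidden=60, rng=rng)   # random, untuned
W = fit_output_weights(H, Y)                        # only layer that is "learned"
acc = ((H @ W).argmax(axis=1) == y).mean()
```

The deep variant described in the abstract would, by contrast, choose each layer's width from the principal components retained by an explicit KPCA before stacking the next layer.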
Pages: 125-141
Page count: 17