1-Norm random vector functional link networks for classification problems

Cited: 12
Authors
Hazarika, Barenya Bikash [1 ,2 ]
Gupta, Deepak [1 ]
Affiliations
[1] Natl Inst Technol, Dept Comp Sci & Engn, Jote, Arunachal Pradesh, India
[2] Koneru Lakshmaiah Educ Fdn, Vaddeswaram, Andhra Pradesh, India
Keywords
1-Norm; Single layer feed-forward neural network; Random vector functional link; Sparseness; Classification; EXTREME LEARNING-MACHINE; KERNEL RIDGE-REGRESSION; MULTILAYER FEEDFORWARD NETWORKS; APPROXIMATION; CLASSIFIERS; ALGORITHM; ENSEMBLE;
DOI
10.1007/s40747-022-00668-y
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents a novel random vector functional link (RVFL) formulation, the 1-norm RVFL (1N RVFL) network, for solving binary classification problems. The solution to the optimization problem of 1N RVFL is obtained by solving its exterior dual penalty problem with a Newton technique. The 1-norm makes the model robust and yields sparse outputs, which is the fundamental advantage of this model. Sparsity means that most elements of the output weight matrix are zero; hence, the decision function can be formed with fewer hidden nodes than the conventional RVFL model requires. 1N RVFL thus produces a classifier based on a smaller number of input features; in other words, the method suppresses neurons in the hidden layer. Statistical analyses have been carried out on several real-world benchmark datasets. The proposed 1N RVFL is evaluated with two activation functions, ReLU and sine. Its classification accuracies are compared with the extreme learning machine (ELM), kernel ridge regression (KRR), RVFL, kernel RVFL (K-RVFL) and generalized Lagrangian twin RVFL (GLTRVFL) networks. The experimental results, with comparable or better accuracy, indicate the effectiveness and usability of 1N RVFL for solving binary classification problems.
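For readers unfamiliar with the RVFL architecture the abstract builds on, the following NumPy sketch shows a standard RVFL binary classifier: random, fixed hidden-layer weights, direct links from the inputs to the output, and a closed-form solve for the output weights. It is illustrative only, with hypothetical function names; it uses a ridge (2-norm) regularized solve, whereas the paper's 1N RVFL replaces this with a 1-norm objective minimized via a Newton method on the exterior dual penalty problem.

```python
import numpy as np

def rvfl_train(X, y, n_hidden=50, reg=1e-2, seed=0):
    """Train a basic RVFL classifier on labels y in {-1, +1}.

    Hidden weights W and biases b are drawn randomly and never updated;
    only the output weights beta are learned.
    """
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = np.maximum(X @ W + b, 0.0)   # ReLU, one of the two activations used in the paper
    D = np.hstack([X, H])            # direct links: raw inputs concatenated with hidden outputs
    # Ridge (2-norm) solve shown here for simplicity; the 1N RVFL of the
    # paper instead minimizes a 1-norm objective, which drives most output
    # weights to exactly zero (sparseness).
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    """Predict class labels in {-1, +1} for rows of X."""
    H = np.maximum(X @ W + b, 0.0)
    return np.sign(np.hstack([X, H]) @ beta)
```

Because the direct links feed the raw features into the output layer, a linearly separable problem is handled by the linear part alone, while the random hidden layer adds nonlinear capacity on top.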
Pages: 3505-3521 (17 pages)