Multi-view Representation Induced Kernel Ensemble Support Vector Machine

Times Cited: 0
Authors
Quayson, Ebenezer [1 ,2 ]
Ganaa, Ernest Domanaanmwi [3 ]
Zhu, Qian [1 ]
Shen, Xiang-Jun [1 ]
Institutions
[1] Jiangsu Univ, Sch Comp Sci & Commun Engn, Zhenjiang 212013, Peoples R China
[2] Univ Energy & Nat Resources, Dept Comp Sci & Informat, POB 214, Sunyani, Ghana
[3] Hilla Limann Tech Univ, Sch Appl Sci & Technol, POB 553, Wa, Ghana
Keywords
Multiple kernel learning; Multiview data modelling; Ensemble model; Kernel classification; TEXT CATEGORIZATION; GENE-EXPRESSION; NEURAL-NETWORKS; SVM; CLASSIFICATION; RECOGNITION; DEEP; SELECTION; MARGIN; SPACE;
DOI
10.1007/s11063-023-11250-z
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This paper proposes a multi-view representation kernel ensemble Support Vector Machine. Unlike conventional multiple kernel learning techniques, which apply a common similarity measure over the entire input space and learn their models solely via a linear combination of basis kernels in a single Reproducing Kernel Hilbert Space (RKHS), our proposed model seeks to concurrently find multiple solutions in corresponding Reproducing Kernel Hilbert Spaces. To achieve this objective, we first derive our proposed model directly from the classical SVM model. Then, leveraging the concept of multi-view data processing, we treat the original data as multi-view data controlled by different sub-models in our proposed model. The multi-view representations of the original data are subsequently transformed into ensemble kernel models in which the linear classifiers are parameterized in multiple kernel spaces. This enables each model to co-optimize the learning of its optimal parameters via the minimization of a cumulative ensemble loss in multiple RKHSs, improving both the classification accuracy and the robustness of the proposed ensemble model. Since the UCI machine learning repository provides publicly available benchmark datasets, we evaluated our model by conducting experiments on several UCI classification and image datasets. The results of our proposed model were compared with those of other state-of-the-art MKL methods, such as SimpleMKL, EasyMKL, MRMKL, RMKL and PWMK. Among these MKL methods, our proposed method demonstrates better performance in the experiments conducted.
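The core idea in the abstract can be sketched in a few lines: split the data into views, fit one kernel classifier per view in its own RKHS, and combine the per-view decision values into an ensemble prediction. The sketch below is a simplified illustration under stated assumptions, not the paper's method: the toy data and feature-subset views are hypothetical, kernel ridge (regularized least squares) stands in for each SVM sub-model, and the sub-models are fitted independently rather than by jointly minimizing a cumulative ensemble loss as the paper describes.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_view(K, y, lam=1e-2):
    """Regularized least-squares fit in one RKHS (kernel ridge),
    used here as a stand-in for the per-view SVM sub-model."""
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

# Toy two-class data; the two "views" are hypothetical feature subsets.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 4))
y = np.sign(X[:, 0] + X[:, 2])               # labels in {-1, +1}

views = [X[:, :2], X[:, 2:]]                 # one view per sub-model
kernels = [lambda A, B: rbf_kernel(A, B, 1.0),   # RKHS 1: Gaussian
           lambda A, B: A @ B.T]                 # RKHS 2: linear

# Fit one kernel classifier per view (independently here, unlike the
# paper's joint minimization of a cumulative ensemble loss).
alphas = [fit_view(k(V, V), y) for V, k in zip(views, kernels)]

def predict(new_views):
    """Ensemble decision: average per-view decision values, take the sign."""
    scores = [k(Vn, V) @ a
              for Vn, V, k, a in zip(new_views, views, kernels, alphas)]
    return np.sign(np.mean(scores, axis=0))

train_acc = (predict(views) == y).mean()
```

Each view contributes a decision function from its own kernel space, so a view whose kernel matches the data geometry poorly is moderated by the others; the paper's joint training couples these sub-models further through a shared ensemble loss.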
Pages: 7035-7056 (22 pages)