Deep multilayer multiple kernel learning

Cited by: 0
Authors
Ilyes Rebai
Yassine BenAyed
Walid Mahdi
Affiliations
[1] University of Sfax, MIRACL: Multimedia InfoRmation System and Advanced Computing Laboratory
[2] Taif University, College of Computers and Information Technology
Keywords
Deep learning; Support vector machine; Multilayer multiple kernel learning; Optimization methods; Gradient ascent;
Abstract
The multiple kernel learning (MKL) approach has been proposed for kernel methods and has shown high performance in several real-world applications. It consists of learning the optimal kernel from a single layer of multiple predefined kernels. Unfortunately, this approach is not rich enough to solve relatively complex problems. With the emergence and success of deep learning, multilayer multiple kernel learning (MLMKL) methods were inspired by the idea of deep architectures and introduced to improve conventional MKL methods. Such architectures learn deep kernel machines by exploring combinations of multiple kernels in a multilayer structure. However, existing MLMKL methods often have trouble optimizing networks of two or more layers, and they do not always outperform the simplest way of combining multiple kernels (i.e., MKL). To improve the effectiveness of MKL approaches, we introduce in this paper a novel backpropagation MLMKL framework. Specifically, we propose to optimize the network with an adaptive backpropagation algorithm that uses gradient ascent rather than the dual objective function or the estimation of the leave-one-out error. We test the proposed method through a large set of experiments on a variety of benchmark data sets and successfully optimize the system over many layers. Empirical results show that our algorithm achieves high performance compared with the traditional MKL approach and existing MLMKL methods.
Pages: 2305-2314
Number of pages: 9
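
The abstract above describes the approach only at a high level, so the following Python sketch is purely illustrative and not the authors' algorithm: it stacks two multiple-kernel layers (a weighted sum of RBF and polynomial base kernels whose output kernel feeds a second bank of RBF kernels) and updates the log-weights mu1 and mu2 by gradient ascent, using kernel-target alignment as a stand-in objective and finite differences as a stand-in for the paper's backpropagated gradients. All names, kernel choices, and hyperparameters here are hypothetical.

    import numpy as np

    def rbf(X, Z, gamma):
        # Gaussian (RBF) base kernel: k(x, z) = exp(-gamma * ||x - z||^2).
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def poly(X, Z, degree):
        # Polynomial base kernel: k(x, z) = (x . z + 1)^degree.
        return (X @ Z.T + 1.0) ** degree

    def combine(kernels, mu):
        # One MKL layer: positive weighted sum of base kernel matrices.
        w = np.exp(mu)  # exponential parametrization keeps weights positive
        return sum(wi * Ki for wi, Ki in zip(w, kernels))

    def alignment(K, y):
        # Kernel-target alignment <K, y y^T>_F / (||K||_F ||y y^T||_F),
        # used here as a simple surrogate for an SVM-based objective.
        Y = np.outer(y, y)
        return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y) + 1e-12)

    # Toy data: two Gaussian blobs with labels in {-1, +1}.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1.0, 1.0, (30, 2)), rng.normal(1.0, 1.0, (30, 2))])
    y = np.hstack([-np.ones(30), np.ones(30)])

    # Layer 1: base kernels computed on the raw inputs.
    base1 = [rbf(X, X, g) for g in (0.1, 1.0)] + [poly(X, X, 2)]
    mu1 = np.zeros(len(base1))  # log-weights of layer 1
    mu2 = np.zeros(2)           # log-weights of layer 2

    def forward(mu1, mu2):
        # Two-layer composition: layer 2 applies RBF kernels to the squared
        # distances induced by the layer-1 output kernel.
        K1 = combine(base1, mu1)
        D1 = np.diag(K1)[:, None] + np.diag(K1)[None, :] - 2.0 * K1
        base2 = [np.exp(-g * D1) for g in (0.5, 2.0)]
        K2 = combine(base2, mu2)
        return alignment(K2, y)

    lr, eps = 0.5, 1e-4
    for it in range(200):
        obj = forward(mu1, mu2)
        grads = []
        for mu in (mu1, mu2):
            g_vec = np.zeros_like(mu)
            for j in range(len(mu)):
                mu[j] += eps                                # perturb one weight
                g_vec[j] = (forward(mu1, mu2) - obj) / eps  # finite-difference gradient
                mu[j] -= eps                                # restore
            grads.append(g_vec)
        mu1 += lr * grads[0]  # gradient *ascent*: the alignment is maximized
        mu2 += lr * grads[1]

    print("final kernel-target alignment:", round(forward(mu1, mu2), 4))

Replacing the alignment objective with an SVM-based criterion and the finite-difference step with analytic layer-wise gradients would bring this sketch closer to the backpropagation framework described in the abstract.
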
Related papers (50 in total)
  • [21] Multiple kernel learning by empirical target kernel
    Wang, Peiyan
    Cai, Dongfeng
    INTERNATIONAL JOURNAL OF WAVELETS MULTIRESOLUTION AND INFORMATION PROCESSING, 2020, 18 (02)
  • [22] Multiple Instance Learning via Multiple Kernel Learning
    Yang, Bing
    Li, Qian
    Jing, Ling
    Zhen, Ling
    OPERATIONS RESEARCH AND ITS APPLICATIONS, 2010, 12 : 160 - 167
  • [23] Bayesian Deep Reinforcement Learning via Deep Kernel Learning
    Junyu Xuan
    Jie Lu
    Zheng Yan
    Guangquan Zhang
    International Journal of Computational Intelligence Systems, 2018, 12 : 164 - 171
  • [24] Bayesian Deep Reinforcement Learning via Deep Kernel Learning
    Xuan, Junyu
    Lu, Jie
    Yan, Zheng
    Zhang, Guangquan
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2019, 12 (01) : 164 - 171
  • [25] A predictive intelligence system of credit scoring based on deep multiple kernel learning
    Wu, Cheng-Feng
    Huang, Shian-Chang
    Chiou, Chei-Chang
    Wang, Yu-Min
    APPLIED SOFT COMPUTING, 2021, 111
  • [26] LSTWSVM fusion of deep feature and multiple kernel learning and its industrial applications
    Liu Y.
    Liu D.-Y.
    Lv Z.
    Zhao J.
    Wang W.
    Kongzhi yu Juece/Control and Decision, 2024, 39 (08): 2622 - 2630
  • [27] Self-Adaptive Deep Multiple Kernel Learning Based on Rademacher Complexity
    Ren, Shengbing
    Shen, Wangbo
    Siddique, Chaudry Naeem
    Li, You
    SYMMETRY-BASEL, 2019, 11 (03)
  • [28] Deep kernel learning in extreme learning machines
    Afzal, A. L.
    Nair, Nikhitha K.
    Asharaf, S.
    PATTERN ANALYSIS AND APPLICATIONS, 2021, 24 (01) : 11 - 19
  • [29] SPARSITY IN MULTIPLE KERNEL LEARNING
    Koltchinskii, Vladimir
    Yuan, Ming
    ANNALS OF STATISTICS, 2010, 38 (06): 3660 - 3695
  • [30] Multiple Kernel Learning Algorithms
    Gonen, Mehmet
    Alpaydin, Ethem
    JOURNAL OF MACHINE LEARNING RESEARCH, 2011, 12 : 2211 - 2268