Kernelized vector quantization in gradient-descent learning

Cited by: 22
Authors
Villmann, Thomas [1 ]
Haase, Sven [1 ]
Kaden, Marika [1 ]
Affiliations
[1] Univ Appl Sci Mittweida, Computat Intelligence Grp, D-09648 Mittweida, Germany
Keywords
Vector quantization; Online learning; Kernel distances; Support vector machines; LVQ; Self-organizing maps; SELF-ORGANIZING MAPS; ASYMPTOTIC LEVEL DENSITY; MAGNIFICATION CONTROL; NEURAL-GAS; ALPHA-BETA; CONVERGENCE; DIVERGENCES; INFORMATION; SPACES; BATCH;
DOI
10.1016/j.neucom.2013.11.048
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Prototype-based vector quantization is usually performed in the Euclidean data space. In recent years, non-standard metrics have also become popular. For classification by support vector machines, Hilbert space representations based on so-called kernel metrics have proven very successful. In this paper we show that gradient-based learning in prototype-based vector quantization is possible with kernel metrics instead of the standard Euclidean distance. We show that appropriate handling requires differentiable universal kernels defining the feature space metric. This allows prototype adaptation in the original data space, but equipped with a metric determined by the kernel, and therefore isomorphic to the respective kernel Hilbert space. At the same time, this approach avoids the explicit Hilbert space representation known from support vector machines. We give the mathematical justification for the isomorphism and demonstrate the abilities and usefulness of this approach on several examples, including both artificial and real-world datasets. (C) 2014 Elsevier B.V. All rights reserved.
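
To make the abstract's idea concrete: for a kernel k, the squared kernel distance between a data point x and a prototype w can be evaluated entirely in the original data space as d_k^2(x, w) = k(x, x) - 2 k(x, w) + k(w, w), so gradient descent on d_k^2 adapts prototypes in the input space while measuring distances in the kernel-induced feature space metric. The Python sketch below illustrates one online gradient-descent step under a Gaussian kernel; the winner-take-all update rule, learning rate, and kernel width are illustrative assumptions, not the paper's exact algorithm.

    import numpy as np

    def gaussian_kernel(x, y, sigma=1.0):
        # Differentiable universal kernel: k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
        d = x - y
        return np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))

    def kernel_dist_sq(x, w, sigma=1.0):
        # d_k^2(x, w) = k(x, x) - 2 k(x, w) + k(w, w); the Gaussian kernel
        # has k(x, x) = k(w, w) = 1, so this simplifies to 2 - 2 k(x, w).
        return 2.0 - 2.0 * gaussian_kernel(x, w, sigma)

    def vq_step(x, prototypes, lr=0.05, sigma=1.0):
        # One online vector-quantization step in the kernel metric.
        # For the Gaussian kernel the gradient of d_k^2 with respect to w is
        #   grad_w d_k^2(x, w) = -(2 / sigma^2) * k(x, w) * (x - w),
        # so the prototype update stays in the original data space.
        dists = [kernel_dist_sq(x, w, sigma) for w in prototypes]
        j = int(np.argmin(dists))  # best-matching (winner) prototype
        w = prototypes[j]
        grad = -(2.0 / sigma ** 2) * gaussian_kernel(x, w, sigma) * (x - w)
        prototypes[j] = w - lr * grad  # gradient-descent update
        return prototypes

    # Usage: adapt three random prototypes to a stream of 2-D samples.
    rng = np.random.default_rng(0)
    prototypes = [rng.normal(size=2) for _ in range(3)]
    for x in rng.normal(size=(200, 2)):
        prototypes = vq_step(x, prototypes)

Note that the update only ever moves w toward x in the input space, so the prototypes remain interpretable as data-space points even though distances are measured in the kernel-induced metric.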
Pages: 83-95
Number of pages: 13
Related papers
50 records in total
  • [1] Semiclassical quantization using invariant tori: A gradient-descent approach
    Tannenbaum, E
    Heller, EJ
    JOURNAL OF PHYSICAL CHEMISTRY A, 2001, 105(12): 2803-2813
  • [2] Kernelized gradient descent method for learning from demonstration
    Hu, Kui
    Zhang, Jiwen
    Wu, Dan
    NEUROCOMPUTING, 2023, 558
  • [3] Border-Sensitive Learning in Kernelized Learning Vector Quantization
    Kaestner, Marika
    Riedel, Martin
    Strickert, Marc
    Hermann, Wieland
    Villmann, Thomas
    ADVANCES IN COMPUTATIONAL INTELLIGENCE, PT I, 2013, 7902: 357+
  • [4] Generalized Derivative Based Kernelized Learning Vector Quantization
    Schleif, Frank-Michael
    Villmann, Thomas
    Hammer, Barbara
    Schneider, Petra
    Biehl, Michael
    INTELLIGENT DATA ENGINEERING AND AUTOMATED LEARNING - IDEAL 2010, 2010, 6283: 21+
  • [5] Gradient-Descent Quantum Process Tomography by Learning Kraus Operators
    Ahmed, Shahnawaz
    Quijandria, Fernando
    Kockum, Anton Frisk
    PHYSICAL REVIEW LETTERS, 2023, 130(15)
  • [6] Learning for hierarchical fuzzy systems based on the gradient-descent method
    Wang, Di
    Zeng, Xiao-Jun
    Keane, John A.
    2006 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOLS 1-5, 2006: 92+
  • [7] Contrastive Learning in Random Neural Networks and its Relation to Gradient-Descent Learning
    Romariz, Alexandre
    Gelenbe, Erol
    COMPUTER AND INFORMATION SCIENCES II, 2012: 511-517
  • [8] Practical Gradient-Descent for Memristive Crossbars
    Nair, Manu V.
    Dudek, Piotr
    2015 INTERNATIONAL CONFERENCE ON MEMRISTIVE SYSTEMS (MEMRISYS), 2015
  • [9] An implicit gradient-descent procedure for minimax problems
    Essid, Montacer
    Tabak, Esteban G.
    Trigila, Giulio
    MATHEMATICAL METHODS OF OPERATIONS RESEARCH, 2023, 97(1): 57-89